WebGPU

W3C 候选推荐草案,

关于本文档的更多信息
本版本:
https://www.w3.org/TR/2025/CRD-webgpu-20250711/
最新发布版本:
https://www.w3.org/TR/webgpu/
编辑草案:
https://gpuweb.github.io/gpuweb/
先前版本:
历史:
https://www.w3.org/standards/history/webgpu/
反馈:
public-gpu@w3.org 邮件主题请写为 “[webgpu] …消息主题…” (存档)
GitHub
编辑者:
(Google)
(Google)
(Mozilla)
前任编辑者:
(Apple Inc.)
(Mozilla)
(Apple Inc.)
参与:
提交问题 (开放问题)
测试套件:
WebGPU CTS

摘要

WebGPU 提供了一个 API,可用于在图形处理单元(GPU)上执行渲染与计算等操作。

本文件状态

本节描述了本文档在发布时的状态。当前 W3C 出版物和本技术报告的最新修订可在 W3C 标准与草案索引 查询。

欢迎对本规范提出反馈和意见。建议优先通过 GitHub Issues 参与讨论。你也可以向 GPU for the Web 工作组邮件列表 public-gpu@w3.org存档)发送评论。本草案突出了工作组仍需讨论的一些待决问题,目前尚未就其结果作出决定,包括其有效性。

本文档由 GPU for the Web 工作组推荐流程 作为候选推荐草案发布。为确保有充分的广泛评审机会,本文件将至少保持候选推荐状态至

工作组期望在至少两个已部署的浏览器中基于现代 GPU 系统 API 演示每项特性的实现。测试套件将用于编写实现报告。

作为候选推荐发布并不代表 W3C 及其成员的认可。本候选推荐草案整合了自前一候选推荐以来的更改,工作组拟将这些更改纳入后续的候选推荐快照。

本文档可随时维护和更新,其中部分内容尚在完善中。

本文档由遵循 W3C 专利政策 的工作组编写。W3C 维护了与本组交付物相关的 专利公开列表,该页面也包括专利披露说明。个人如获知包含 必要权利要求 的专利,应依 W3C 专利政策第6节 披露相关信息。

本文档受 2023年11月3日 W3C 流程文档 管辖。

1. 引言

本节为非规范性内容。

图形处理单元(GPU)在个人计算领域中已成为实现丰富渲染和计算应用的关键。WebGPU 是一个为 Web 暴露 GPU 硬件能力的 API。该 API 从零开始设计,能够高效映射到(2014年之后的)原生 GPU API。WebGPU 与 WebGL 无关,也不直接针对 OpenGL ES。

WebGPU 将物理 GPU 硬件视为 GPUAdapter。它通过 GPUDevice 提供与适配器的连接,设备负责管理资源,并通过设备的 GPUQueue 执行命令。GPUDevice 可能拥有自己的高速存储器以供处理单元访问。GPUBuffer 和 GPUTexture 是由 GPU 内存支持的物理资源;GPUCommandBuffer 和 GPURenderBundle 是用户录制命令的容器;GPUShaderModule 包含着色器代码。其他资源,如 GPUSampler 或 GPUBindGroup,用于配置 GPU 如何使用这些物理资源

GPU 执行在 GPUCommandBuffer 中编码的命令,命令经由管线处理;管线是固定功能与可编程阶段的混合体。可编程阶段运行着色器,即专为 GPU 硬件设计的特殊程序。管线的大部分状态由 GPURenderPipeline 或 GPUComputePipeline 对象定义。未包含在这些管线对象中的状态,则在命令编码阶段通过如 beginRenderPass() 和 setBlendConstant() 之类的命令设置。

2. 恶意使用考量

本节为非规范性内容,介绍了在 Web 上暴露此 API 所带来的风险。

2.1. 安全考量

WebGPU 的安全要求与 Web 一贯的要求相同,也是不可协商的。总体原则是:在所有命令到达 GPU 之前进行严格校验,确保页面只能操作自身数据。

2.1.1. 基于 CPU 的未定义行为

WebGPU 实现会将用户发起的工作量转换为目标平台特定的 API 命令。原生 API 规定了命令的有效用法(例如见 vkCreateDescriptorSetLayout),并且通常不保证不遵守有效用法规则时的任何结果。这被称为“未定义行为”,攻击者可以利用它访问非授权内存,或迫使驱动执行任意代码。

为禁止不安全用法,WebGPU 针对任意输入都定义了允许的行为范围。实现必须校验所有用户输入,只允许有效工作负载到达驱动。本规范规定了所有错误条件及其处理语义。例如,在 copyBufferToBuffer() 的 "source" 和 "destination" 同时指定同一缓冲区,且区间相交时,GPUCommandEncoder 会生成错误,且不会执行其它操作。

更多错误处理信息参见 § 22 错误与调试
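下面的非规范草图(假设运行在支持 WebGPU 的浏览器环境中;函数名均为示意)演示了这类校验错误如何通过错误作用域被观察到,并附带一个纯函数来示意"区间相交"的判断思路:

```javascript
// 纯函数示意:判断同一缓冲区内源区间与目标区间是否相交
// (与 copyBufferToBuffer 自拷贝校验的思路一致,仅为示意)
function rangesOverlap(srcOffset, dstOffset, size) {
  return srcOffset < dstOffset + size && dstOffset < srcOffset + size;
}

// 假设的用法草图:同一缓冲区的相交区间拷贝会生成校验错误
async function demonstrateValidationError(device) {
  const buffer = device.createBuffer({
    size: 256,
    usage: GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
  });
  device.pushErrorScope('validation');
  const encoder = device.createCommandEncoder();
  // source 与 destination 为同一缓冲区,且区间 [0,16) 与 [8,24) 相交
  encoder.copyBufferToBuffer(buffer, 0, buffer, 8, 16);
  encoder.finish();
  return device.popErrorScope(); // 预期 resolve 为一个 GPUValidationError
}
```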

2.1.2. 基于 GPU 的未定义行为

WebGPU 着色器 由 GPU 内部的计算单元执行。在原生 API 中,某些着色器指令可能在 GPU 上导致未定义行为。为应对这一点,WebGPU 严格定义了着色器指令集及其行为。当为 createShaderModule() 提供着色器时,WebGPU 实现必须在进行任何平台特定着色器转换或变换之前进行校验。

2.1.3. 未初始化数据

一般来说,分配新内存可能暴露系统上其它应用残留的数据。为避免此问题,WebGPU 在概念上会将所有资源初始化为零,尽管若实现检测到开发者已手动初始化内容时可跳过此步骤。这包括着色器内变量和共享工作组内存。

清除工作组内存的具体机制依平台而异。若原生 API 未提供清除机制,WebGPU 实现会先在所有调用(invocation)中将其清零,再同步各调用,然后才执行开发者代码。

注:
资源在队列操作中被使用时,其初始化状态只有在操作入队时才能确定(而不是在命令缓冲区编码时)。因此,一些实现会在入队时执行非优化的延迟清除(如清除纹理,而不是将 GPULoadOp "load" 改为 "clear")。

因此,当实现不得不执行这类有性能损耗的清除时,应当在开发者控制台发出警告。

2.1.4. 着色器中的越界访问

着色器可以直接访问物理资源(如 "uniform" GPUBufferBinding),也可以通过纹理单元(为纹理坐标转换而设的固定功能硬件块)间接访问。WebGPU API 的校验只能保证所有着色器输入已提供且类型正确。如果未通过纹理单元访问,WebGPU API 无法保证数据访问不会越界。

为防止着色器访问非本应用 GPU 内存,WebGPU 实现可在驱动中启用“健壮缓冲区访问”模式,确保访问不会超出缓冲区界限。

或者,实现可在着色器代码中插入手动越界检查。此时,越界检查仅适用于数组索引,对结构体字段访问则无需手动检查,因为主机侧 minBindingSize 校验已覆盖。

若着色器尝试读取物理资源边界外的数据,实现可以:

  1. 返回资源边界内其他位置的值

  2. 返回值向量 "(0, 0, 0, X)",其中 X 可为任意值

  3. 部分丢弃绘制或调度调用

若着色器尝试写入物理资源边界外,实现可以:

  1. 写入资源边界内其他位置

  2. 丢弃写入操作

  3. 部分丢弃绘制或调度调用
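上述允许的处理方式可以用如下非规范的纯函数草图说明(仅为示意,实际选择由实现与驱动决定):

```javascript
// 示意:对越界读取的一种允许处理方式 —— 将索引钳制到资源边界内
function clampIndex(index, length) {
  if (length === 0) return 0;
  return Math.min(Math.max(index, 0), length - 1);
}

// 示意:另一种允许处理方式 —— 越界读取返回 (0, 0, 0, X)
function oobReadValue() {
  const X = 0; // 规范允许 X 为任意值;此处示意取 0
  return [0, 0, 0, X];
}

// 示意:越界写入可被直接丢弃
function checkedWrite(array, index, value) {
  if (index >= 0 && index < array.length) array[index] = value;
  return array;
}
```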

2.1.5. 无效数据

从 CPU 向 GPU 上传 浮点数数据或在 GPU 上生成浮点数时,可能会出现不对应于有效数值的二进制表示(如无穷大或 NaN)。此时 GPU 行为取决于硬件对 IEEE-754 标准的实现精度。WebGPU 保证引入无效浮点数只会影响算术运算结果,不会有其他副作用。

2.1.6. 驱动漏洞

GPU 驱动程序和其他软件一样可能存在漏洞。攻击者可能利用驱动的错误行为访问非授权数据。为降低风险,WebGPU 工作组将与 GPU 厂商协作,将 WebGPU 一致性测试套件(CTS)集成到驱动测试流程中,类似 WebGL 的做法。WebGPU 实现应对已发现的部分漏洞有兼容性处理,并对无法绕过的已知漏洞驱动禁用 WebGPU。

2.1.7. 时序攻击

2.1.7.1. 内容时间线时序

WebGPU 不会向 JavaScript 暴露可在 agent cluster 中各 agent 之间共享的新状态。内容时间线状态(如 [[mapping]])仅在显式的内容时间线任务(如普通 JavaScript)中变化。

2.1.7.2. 设备/队列时间线时序

可写存储缓冲区和其他跨调用通信机制可能用于在队列时间线上构造高精度定时器。

可选的 "timestamp-query" 特性也为 GPU 操作提供高精度计时。为缓解安全和隐私风险,定时查询的值被对齐到较低精度:参见 current queue timestamp。特别注意:

2.1.8. Row hammer 攻击

Row hammer 是一类利用 DRAM 单元状态泄漏的攻击,可被用于 GPU。WebGPU 没有专门的防护措施,依赖于平台级解决方案,例如缩短内存刷新间隔。

2.1.9. 拒绝服务

WebGPU 应用可访问 GPU 内存和计算单元。WebGPU 实现可以限制应用可用的 GPU 内存,以保证其他应用的响应性。对于 GPU 处理时间,WebGPU 实现可以设置“看门狗”定时器,确保应用不会导致 GPU 无响应超过数秒。这些措施类似于 WebGL 所采用的策略。

2.1.10. 工作负载识别

WebGPU 提供对全局受限资源的访问,这些资源由同一台机器上不同程序(和网页)共享。应用可通过间接探测全局资源受限程度,进而推测其他网页正在进行的工作负载。这些问题与 JavaScript 中的系统内存和 CPU 执行吞吐量问题类似。WebGPU 没有对此提供额外的缓解措施。

2.1.11. 内存资源

WebGPU 允许从机器全局内存堆(如 VRAM)进行可失败的分配。这使得应用可以通过尝试分配并观察分配失败情况,探测系统剩余可用内存(针对特定堆类型)。

GPU 内部一般有一个或多个(通常只有两个)所有运行应用共享的内存堆。当某个堆耗尽时,WebGPU 创建资源会失败。这是可观察到的,可能使恶意应用推测出其他应用使用了哪些堆,以及它们分配了多少。

2.1.12. 计算资源

如果一个站点与另一个站点同时使用 WebGPU,它可能会观察到处理某些工作的耗时增加。例如,持续向队列提交计算工作并跟踪完成时间,可能会发现其他任务也开始使用 GPU。

GPU 有多个可独立测试的部分,如算术单元、纹理采样单元、原子单元等。恶意应用可以检测某些单元的负载,并试图分析压力模式来猜测其他应用的工作负载。这与 JavaScript 在 CPU 上执行时的情形类似。

2.1.13. 能力滥用

恶意站点可能滥用 WebGPU 所暴露的能力,运行对用户或其体验无益、仅对站点有利的计算。例如隐蔽挖矿、密码破解或彩虹表计算。

无法针对这类 API 使用方式加以防范,因为浏览器无法区分有效负载和滥用负载。这是 Web 上所有通用计算能力(如 JavaScript、WebAssembly 或 WebGL)面临的普遍问题。WebGPU 只是让某些负载的实现更容易,或比 WebGL 更高效。

为缓解此类滥用,浏览器可对后台标签页的操作进行限速,可警告标签页资源占用过高,并可限制哪些上下文允许使用 WebGPU。

用户代理可基于启发式方法向用户发出高功耗警告,尤其在检测到潜在恶意使用时。如果实现此类警告,应将 WebGPU 使用纳入与 JavaScript、WebAssembly、WebGL 等相同的启发式中。

2.2. 隐私考量

此处存在追踪向量。WebGPU 的隐私考量与 WebGL 类似。GPU API 十分复杂,必须出于必要暴露设备能力的各个方面,以便开发者能够有效利用这些能力。一般的缓解方法是对潜在可识别信息进行归一化或分桶,并在可能的情况下强制行为一致。

用户代理不得暴露超过 32 种可区分配置或分桶。

2.2.1. 机器特有功能与限制

WebGPU 可以揭示底层 GPU 架构和设备结构的许多细节,包括可用的物理适配器、GPU 和 CPU 资源的多项限制(如最大纹理尺寸),以及可用的任何可选硬件专属能力。

用户代理没有义务暴露真实硬件限制,可以完全控制机器细节的暴露程度。减少指纹识别的一种策略是将所有目标平台归为少数几个分桶。总体来说,暴露硬件限制的隐私影响与 WebGL 相同。

默认限制值也被故意设置得足够高,以便大多数应用无需请求更高限制即可运行。所有 API 的使用都按请求限制进行校验,因此不会因意外而向用户暴露实际硬件能力。

2.2.2. 机器特有产物

和 WebGL 一样,可以观察到一些机器特有的光栅化/精度差异和性能差异。这涉及光栅化覆盖及模式、着色器阶段间 varyings 的插值精度、计算单元调度,以及更多执行相关特性。

通常,同一厂商的大多数或全部设备的光栅化和精度指纹是相同的。性能差异难以完全规避,但信号强度较低(如 JS 执行性能)。

对隐私要求高的应用和用户代理应采用软件实现以消除这类产物。

2.2.3. 机器特有性能

通过测量 GPU 上特定操作的性能也是区分用户的一个因素。即便计时精度较低,重复执行某一操作也能体现用户机器在特定负载下的快慢。这是常见的识别向量(WebGL 和 JavaScript 皆有),但信号较低且难以完全规避。

WebGPU 计算管线让开发者可绕过固定功能硬件直接访问 GPU,这带来了独特设备指纹识别的额外风险。用户代理可通过将逻辑 GPU 调用与实际计算单元解耦来降低此风险。

2.2.4. 用户代理状态

本规范未为源定义任何额外用户代理状态。但预期用户代理会对 GPUShaderModuleGPURenderPipelineGPUComputePipeline 等高开销编译结果进行缓存。这些缓存有助于提升 WebGPU 应用首次访问后的加载速度。

对规范而言,这些缓存与极快编译无异,但应用可以轻易测量 createComputePipelineAsync() 的耗时。这可能导致跨源信息泄露(如“用户是否访问过包含特定着色器的站点”),因此用户代理应遵循 存储分区最佳实践。

系统的 GPU 驱动也可能有自己的着色器和管线编译缓存。用户代理可在可能的情况下禁用该缓存,或为着色器添加分区数据,使驱动将其视为不同对象。

2.2.5. 驱动漏洞

安全考量中提到的问题外,驱动漏洞还可能导致行为差异,成为区分用户的手段。此处可采用安全考量中提及的缓解措施,如与 GPU 厂商协作,在用户代理中实现已知问题的兼容处理。

2.2.6. 适配器标识符

WebGL 实践表明,开发者有合理需求识别其代码运行的 GPU,以便创建和维护健壮的 GPU 内容。例如识别有已知驱动漏洞的适配器,以便规避或避免在某些硬件上使用表现不佳的特性。

但暴露适配器标识符自然会增加指纹识别信息量,因此有必要限制适配器识别的精度。

可以采取多项缓解措施以平衡内容健壮性与隐私保护。首先,用户代理可通过主动识别和规避已知驱动问题,减少开发者负担,就像浏览器开始用 GPU 以来所做的一样。

默认暴露适配器标识符时应尽量宽泛,只要有用即可。例如,可仅标识适配器厂商及架构而非具体型号。有时也可报告实际适配器的合理代理的标识符。

在需要完整详细适配器信息(如提交 bug 报告)时,可请用户同意向页面披露更多硬件信息。

最后,若用户代理认为合适(如增强隐私模式下),可完全不报告适配器标识符。

3. 基础

3.1. 约定

3.1.1. 语法速记

本规范中使用了如下语法速记:

.(点)语法,常见于编程语言。

短语“Foo.Bar”表示“值(或接口)Foo 的 Bar 成员”。如果 Foo 为有序映射(ordered map)且 Bar 在 Foo 中不存在,则其为 undefined。

短语“Foo.Bar提供”表示“Bar 成员存在映射Foo 中”。

?.(可选链)语法,借鉴自 JavaScript。

短语“Foo?.Bar”表示:“如果 Foo 为 null 或 undefined,或 Bar 在 Foo 中不存在,则为 undefined;否则为 Foo.Bar”。

例如,若 buffer 是一个 GPUBuffer,则 buffer?.[[device]].[[adapter]] 表示:“如果 buffer 为 null 或 undefined,则为 undefined;否则为 buffer 的 [[device]] 内部槽的 [[adapter]] 内部槽。”

??(空值合并)语法,借鉴自 JavaScript。

短语“x ?? y”表示:“如果 x 不为 null 或 undefined,则为 x,否则为 y。”
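这两种速记与 JavaScript 中对应运算符的语义一致,例如:

```javascript
const foo = { bar: 1 };
const none = null;

// ?.(可选链):对象为 null/undefined 时整个表达式为 undefined
const a = foo?.bar;   // 1
const b = none?.bar;  // undefined

// ??(空值合并):仅在左侧为 null/undefined 时取右侧
const c = none ?? 5;  // 5
const d = 0 ?? 5;     // 0(注意:0 不是 null/undefined)
```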

槽支持属性(slot-backed attribute)

由同名内部槽支持的 WebIDL 属性。其可变性视规范而定。

3.1.2. WebGPU 对象

WebGPU 对象WebGPU 接口内部对象组成。

WebGPU 接口定义了WebGPU 对象的公共接口和状态。它可在其创建的内容时间线上使用,此时它是 JavaScript 暴露的 WebIDL 接口。

任何包含 GPUObjectBase 的接口都是 WebGPU 接口

内部对象跟踪WebGPU 对象设备时间线上的状态。所有对内部对象可变状态的读写都发生在单一有序的设备时间线步骤中。

WebGPU 对象上可定义以下特殊属性类型:

不可变属性(immutable property)

在对象初始化时设定的只读槽,可从任意时间线访问。

注:由于该槽不可变,实现可在多个时间线中保有副本,按需分配。不可变属性这样定义是为避免在规范中描述多个副本。

若命名为 [[带括号]],则是内部槽。
若命名为 不带括号,则是 WebGPU 接口readonly 槽支持属性

内容时间线属性(content timeline property)

仅可从对象创建的内容时间线访问的属性。

若命名为 [[带括号]],则是内部槽。
若命名为 不带括号,则是 WebGPU 接口槽支持属性

设备时间线属性(device timeline property)

跟踪内部对象状态,仅可从对象创建的设备时间线访问。设备时间线属性可为可变的。

设备时间线属性命名为 [[带括号]],为内部槽。

队列时间线属性(queue timeline property)

跟踪内部对象状态,仅可从对象创建的队列时间线访问。队列时间线属性可为可变的。

队列时间线属性命名为 [[带括号]],为内部槽。

interface mixin GPUObjectBase {
    attribute USVString label;
};
创建一个新的 WebGPU 对象GPUObjectBase parent,接口 TGPUObjectDescriptorBase descriptor)(其中 T 继承自 GPUObjectBase),请在内容时间线上执行以下步骤:
  1. deviceparent.[[device]]

  2. objectT 的新实例。

  3. 设置 object.[[device]]device

  4. 设置 object.labeldescriptor.label

  5. 返回 object

GPUObjectBase 具有如下不可变属性

[[device]],类型为device,只读

拥有该内部对象设备

对此对象内容的操作断言其运行在设备时间线上,并且设备是有效的。

GPUObjectBase 具有如下内容时间线属性

label类型为 USVString

开发者提供的标签,由实现自定义使用。可由浏览器、操作系统或其他工具用于帮助开发者识别底层内部对象。示例包括在 GPUError 消息、控制台警告、浏览器开发者工具和平台调试工具中显示标签。

注:
实现应当利用标签提升错误消息的可读性,用于标识 WebGPU 对象。

但这不必是标识对象的唯一方式:实现还应结合其他可用信息,尤其是在没有标签时。例如:

注:
labelGPUObjectBase 的属性。两个 GPUObjectBase “包装器”对象即使引用同一底层对象,其 label 状态也是完全独立的(如由 getBindGroupLayout() 返回时)。除非通过 JavaScript 设置,否则 label 属性不会变更。

这意味着一个底层对象可关联多个标签。本规范未定义标签如何传递到 设备时间线。标签的使用完全由实现自定义:错误消息可显示最近设置的标签、所有已知标签,或完全不显示标签。

属性类型为 USVString,因为某些用户代理可能将其传递给底层原生 API 的调试工具。

GPUObjectBase 具有如下设备时间线属性

[[valid]],类型为boolean,初始值为 true

若为 true,表示该内部对象可用。

注:
理想情况下,WebGPU 接口不应阻止其父对象(如拥有它的 [[device]])被垃圾回收。然而,这无法保证,因为某些实现可能需要强引用父对象。

因此,开发者应假定 WebGPU 接口在所有子对象被回收前可能不会被垃圾回收。这可能导致部分资源比预期保留更长时间。

如果需要可预测地释放分配的资源,应优先调用 destroy 方法(如 GPUDevice.destroy()GPUBuffer.destroy()),而不是依赖垃圾回收。

3.1.3. 对象描述符

对象描述符用于保存创建对象所需的信息,通常通过 GPUDevicecreate* 方法之一进行对象创建。

dictionary GPUObjectDescriptorBase {
    USVString label = "";
};

GPUObjectDescriptorBase 具有如下成员:

label类型为 USVString,默认值为 ""

GPUObjectBase.label 的初始值。
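下面的非规范草图演示了 label 成员的默认值行为;withDescriptorDefaults 为示意用的辅助函数,createLabeledBuffer 假设在支持 WebGPU 的浏览器环境中运行:

```javascript
// 示意:GPUObjectDescriptorBase 的默认值处理(label 默认为 "")
function withDescriptorDefaults(descriptor = {}) {
  return { label: '', ...descriptor };
}

// 假设的浏览器用法:创建对象时在描述符中提供 label
async function createLabeledBuffer(device) {
  return device.createBuffer({
    label: 'vertex data',  // 作为 GPUObjectBase.label 的初始值
    size: 1024,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
  });
}
```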

3.2. 异步性

3.2.1. 无效的内部对象与传染性无效

WebGPU 中的对象创建操作不会返回 Promise,但其本质上是异步的。返回的对象引用的是在设备时间线上被操作的内部对象。多数在设备时间线上发生的错误不会通过异常或拒绝抛出,而是通过关联设备上生成的 GPUError 进行传递。

内部对象可以是有效无效无效对象不会在之后变为有效,但部分有效对象可以被使无效(invalidated)

如果对象无法被创建,则其从创建开始即为无效。例如,对象描述符未能描述出有效对象,或者没有足够内存分配资源时会发生此情况。如果从另一个无效对象创建对象(例如对无效的 GPUTexture 调用 createView()),也会如此。这类情况称为传染性无效

大多数类型的内部对象在创建后不会变为无效,但仍可能变得不可用,例如其所属设备丢失destroyed,或对象处于特殊内部状态(如缓冲区状态“destroyed”)。

部分类型的内部对象在创建后可以变为无效;具体包括设备适配器GPUCommandBuffer,以及命令/通道/捆绑编码器。

给定 GPUObjectBase object,若 object.[[valid]]true,则为有效
给定 GPUObjectBase object,若 object.[[valid]]false,则为无效
给定 GPUObjectBase object,若满足以下设备时间线步骤的所有要求,则其可与 targetObject 一同使用(valid to use with)
使无效(invalidate) GPUObjectBase object,请在设备时间线上执行以下步骤:
  1. object.[[valid]] 设为 false

3.2.2. Promise 顺序

WebGPU 中有若干操作会返回 promise。

WebGPU 不保证这些 promise 的 settle(resolve 或 reject)顺序,除非有如下例外:

应用不得依赖于其它 promise 的 settle 顺序。

3.3. 坐标系统

渲染操作使用如下坐标系统:

注:WebGPU 的坐标系统与 DirectX 图形管线中的坐标系统一致。

3.4. 编程模型

3.4.1. 时间线

WebGPU 的行为以“时间线”为单位进行描述。每个操作(以算法定义)都发生在某个时间线上。时间线明确了操作的顺序,以及哪些状态可被哪些操作访问。

注: 这种“时间线”模型反映了浏览器引擎多进程模型(如“内容进程”和“GPU 进程”)的约束,也体现了许多实现中 GPU 作为独立执行单元的现实。实现 WebGPU 并不要求时间线并行执行,因此不要求多进程,甚至不要求多线程。(但如 获取上下文图像内容副本 这类需同步阻塞其它时间线完成的情况,仍需支持并发。)

内容时间线

与 Web 脚本的执行相关。包括本规范描述的所有方法调用。

要从 GPUDevice device 的操作向内容时间线派发步骤,可通过 为 GPUDevice 排队全局任务

设备时间线

与用户代理发起的 GPU 设备操作相关。包括适配器、设备、GPU 资源与状态对象的创建,这些操作从用户代理控制 GPU 的部分看通常是同步的,但可在独立的操作系统进程中运行。

队列时间线

与 GPU 计算单元上操作的执行相关。包括实际在 GPU 上运行的绘制、拷贝和计算任务。

时间线无关

可以与上述任意时间线关联。

如仅操作不可变属性或调用步骤传入的参数,则可派发到任意时间线。

以下展示了与各时间线关联的步骤和值的样式。该样式为非规范性内容,规范文本始终明确描述了其关联。
不可变值示例术语定义

可用于任意时间线。

内容时间线示例术语定义

仅可用于内容时间线

设备时间线示例术语定义

仅可用于设备时间线

队列时间线示例术语定义

仅可用于队列时间线

时间线无关的步骤如下所示。

不可变值示例术语的用法。

内容时间线上执行的步骤如下所示。

不可变值示例术语的用法。 内容时间线示例术语的用法。

设备时间线上执行的步骤如下所示。

不可变值示例术语的用法。 设备时间线示例术语的用法。

队列时间线上执行的步骤如下所示。

不可变值示例术语的用法。 队列时间线示例术语的用法。

本规范中,当返回值依赖于非内容时间线上发生的工作时,采用异步操作。API 以 promise 或事件的形式表现异步操作。

GPUComputePassEncoder.dispatchWorkgroups() 示例:
  1. 用户在 内容时间线 上通过 GPUComputePassEncoder 的方法编码 dispatchWorkgroups 命令。

  2. 用户调用 GPUQueue.submit(),将 GPUCommandBuffer 提交给用户代理,用户代理在 设备时间线 上调用操作系统驱动进行底层提交。

  3. GPU 调度器在 队列时间线 上将提交分派到实际计算单元执行。

GPUDevice.createBuffer() 示例:
  1. 用户填写 GPUBufferDescriptor,并用其创建 GPUBuffer,在 内容时间线 上发生。

  2. 用户代理在 设备时间线 上创建底层缓冲区。

GPUBuffer.mapAsync() 示例:
  1. 用户在 内容时间线 上请求映射 GPUBuffer,并获得一个 promise。

  2. 用户代理检查该缓冲区是否正被 GPU 使用,并设置提醒在使用结束后再次检查。

  3. 在 GPU 于 队列时间线 上使用完缓冲区后,用户代理将其映射到内存并 resolve 该 promise。
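上述三条时间线协作的典型读回(readback)流程可草绘如下(非规范,假设浏览器环境;对齐辅助函数为示意):

```javascript
// 示意:把 GPU 缓冲区内容映射回 CPU
// (内容时间线发起映射请求,GPU 在队列时间线用完缓冲区后 promise 才 resolve)
async function readbackBuffer(device, srcBuffer, size) {
  const staging = device.createBuffer({
    size,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });
  const encoder = device.createCommandEncoder();
  encoder.copyBufferToBuffer(srcBuffer, 0, staging, 0, size);
  device.queue.submit([encoder.finish()]);

  await staging.mapAsync(GPUMapMode.READ);
  const copy = new Uint8Array(staging.getMappedRange()).slice();
  staging.unmap();
  return copy;
}

// 纯函数示意:把数值向上对齐到 alignment 的倍数
// (映射的 offset/size 分别要求 8 与 4 的倍数)
function alignTo(value, alignment) {
  return Math.ceil(value / alignment) * alignment;
}
```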

3.4.2. 内存模型

本节为非规范性内容。

一旦应用初始化过程中获取到 GPUDevice,我们可以将 WebGPU 平台描述为包含以下各层:

  1. 实现本规范的用户代理。

  2. 为该设备提供底层原生 API 驱动的操作系统。

  3. 实际的 CPU 和 GPU 硬件。

WebGPU 平台的每一层可能有不同类型的内存,用户代理在实现规范时需要加以考虑:

大多数物理资源分配在适合 GPU 计算或渲染的内存类型中。当用户需要向 GPU 提供新数据时,数据可能首先需要跨越进程边界,抵达与 GPU 驱动通信的用户代理部分。随后可能需要使数据对驱动可见,有时这需要拷贝到驱动分配的中转(staging)内存。最后,数据还可能需要传输到独立 GPU 内存中,并在内部转换为最适合 GPU 运算的布局。

所有这些转换都由用户代理的 WebGPU 实现负责完成。

注:上述示例描述了最坏情况,实际实现中可能无需跨越进程边界,或者能直接向用户暴露由驱动管理的内存(例如通过 ArrayBuffer),从而避免数据拷贝。

3.4.3. 资源用法

物理资源可以被GPU 命令内部用法使用:

输入(input)

为 draw 或 dispatch 调用提供输入数据的缓冲区。会保留内容。由缓冲区的 INDEX、VERTEX 或 INDIRECT 用法允许。

常量(constant)

从着色器视角为常量的资源绑定。会保留内容。由缓冲区 UNIFORM 或纹理 TEXTURE_BINDING 允许。

存储(storage)

读/写存储资源绑定。由缓冲区 STORAGE 或纹理 STORAGE_BINDING 允许。

存储只读(storage-read)

只读存储资源绑定。会保留内容。由缓冲区 STORAGE 或纹理 STORAGE_BINDING 允许。

附件(attachment)

在渲染通道中作为读/写输出附件,或作为仅写的多重采样解析目标(resolve target)的纹理。由纹理 RENDER_ATTACHMENT 允许。

附件只读(attachment-read)

在渲染通道中作为只读附件的纹理。会保留内容。由纹理 RENDER_ATTACHMENT 允许。

子资源(subresource)指一个完整的缓冲区,或一个纹理子资源

某些内部用法彼此兼容。子资源(subresource)可以处于多个用法组合的状态。若列表 U 满足下列任一规则,则称其为兼容用法列表(compatible usage list)

强制要求用法仅可组合为兼容用法列表,使得 API 能够限制内存操作中的数据竞争出现时机。该属性使基于 WebGPU 的应用更容易无修改地在不同平台上运行。

示例:
在同一 GPURenderPassEncoder 中,将同一缓冲区既作为存储,又作为输入绑定,会导致该缓冲区的用法列表为非兼容用法列表
示例:
这些规则允许只读深度模板:单个深度/模板纹理可在渲染通道中同时以两种只读用法使用:
示例:
存储用法例外允许两种原本不被允许的情况:
示例:
附件用法例外允许一个纹理子资源多次作为附件使用。这对于允许 3D 纹理的不相交切片作为不同附件绑定到同一渲染通道很有必要。

但同一切片不得作为两个不同附件重复绑定;这由 beginRenderPass() 检查。

3.4.4. 同步机制

用法范围(usage scope)是一个有序映射,其键为子资源,值为 列表<内部用法>。每个用法范围覆盖一组可能并发执行的操作,因此在该范围内对子资源的使用只能构成兼容用法列表

若对于 scope 中每个 [subresource, usageList],usageList 均为兼容用法列表,则 用法范围 scope 通过用法范围校验
要向用法范围(usage scope) usageScope 添加(add)一个子资源(subresource) subresource 及其用法 usage(单个内部用法或一组内部用法):
  1. 如果 usageScope[subresource] 不存在,则将其设为 []

  2. 追加 usageusageScope[subresource]。

要将用法范围(usage scope) A 合并(merge)入用法范围 B:
  1. 对于 A 中的每一项 [subresource, usage]:

    1. 添加(Add) subresourceB,并指定 usage usage
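添加与合并算法可用如下非规范 JavaScript 草图表达。其中 isCompatibleUsageList 只是一个简化示意(假设:只读用法可任意组合,可写用法只能单独出现),并非规范中完整的兼容规则(规范另有 storage 与 attachment 的例外):

```javascript
// 用法范围:子资源 -> 内部用法列表 的有序映射(此处用 Map 示意)
function addUsage(usageScope, subresource, usage) {
  if (!usageScope.has(subresource)) usageScope.set(subresource, []);
  usageScope.get(subresource).push(...[].concat(usage));
}

// 将范围 a 合并入范围 b
function mergeScopes(a, b) {
  for (const [subresource, usages] of a) addUsage(b, subresource, usages);
  return b;
}

// 简化示意的兼容性判断:只读用法可组合,可写用法需单独出现
const READ_ONLY = new Set(['input', 'constant', 'storage-read', 'attachment-read']);
function isCompatibleUsageList(usages) {
  if (usages.length <= 1) return true;
  return usages.every(u => READ_ONLY.has(u));
}
```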

用法范围在编码期间被构建和校验:

用法范围如下:

注:拷贝命令为独立操作,不使用用法范围进行校验,其自身实现了防止自竞争的校验。

示例:
以下资源用法被计入用法范围

3.5. 核心内部对象

3.5.1. 适配器(Adapters)

适配器(adapter)标识系统上的 WebGPU 实现:既包括底层平台上的计算/渲染功能实例,也包括浏览器在该功能之上实现 WebGPU 的实例。

适配器通过 GPUAdapter 暴露。

适配器并不唯一代表底层实现:多次调用 requestAdapter() 每次都会返回不同的适配器对象。

每个 适配器对象只能用于创建一个 设备(device):一旦成功调用 requestDevice(),该适配器的 [[state]] 变为 "consumed"。此外,适配器对象可能在任何时刻过期(expire)

注: 这样保证应用在创建设备时能使用最新的系统状态来选择适配器。也让多种场景(如首次初始化、因适配器移除重初始化、因 GPUDevice.destroy() 测试重初始化等)行为保持一致,提高健壮性。

若适配器以显著性能换取更广兼容性、更可预测行为或更好的隐私,则可视为回退适配器(fallback adapter)。不是每个系统都必须提供回退适配器。

适配器具有如下不可变属性

[[features]],类型为 有序集合<GPUFeatureName>,只读

可用于在此适配器上创建设备的特性

[[limits]],类型为支持的限制,只读

可用于在此适配器上创建设备的最佳限制。每个限制值必须与支持的限制中该项的默认值相同或更好(better)。

[[fallback]],类型为 boolean,只读

true 时,表示该适配器是回退适配器

[[xrCompatible]],类型为 boolean

true 时,表示该适配器请求了与 WebXR 会话 兼容。

适配器具有如下设备时间线属性

[[state]],初始值为 "valid"
"valid"

适配器可用于创建设备。

"consumed"

适配器已被用于创建设备,不能再次使用。

"expired"

适配器因其他原因已过期。

使 GPUAdapter 过期,请在设备时间线上执行:
  1. adapter.[[adapter]].[[state]] 设为 "expired"

3.5.2. 设备(Devices)

设备(device)适配器的逻辑实例,通过它可以创建内部对象

设备通过 GPUDevice 暴露。

一个设备独占其上创建的所有内部对象:当该设备变为无效丢失销毁时),它本身和所有直接(如 createTexture())或间接(如 createView())创建的对象都会隐式变为不可用

设备具有如下不可变属性

[[adapter]],类型为 适配器,只读

创建该设备的适配器

[[features]],类型为 有序集合<GPUFeatureName>,只读

可在该设备上使用的特性,在创建时确定。即使底层适配器支持更多特性,也不能使用额外特性。

[[limits]],类型为支持的限制,只读

可在该设备上使用的限制,在创建时确定。即使底层适配器支持更高限制,也不能使用更高值(better)。

设备具有如下内容时间线属性

[[content device]],类型为 GPUDevice,只读

该设备关联的内容时间线 GPUDevice 接口实例。

要从适配器 adapterGPUDeviceDescriptor descriptor 创建新设备,请执行以下设备时间线步骤:
  1. featuresdescriptor.requiredFeatures 中值的有序集合

  2. 如果 features 包含 "texture-formats-tier2"

    1. 追加 "texture-formats-tier1"features

  3. 如果 features 包含 "texture-formats-tier1"

    1. 追加 "rg11b10ufloat-renderable"features

  4. 追加 "core-features-and-limits"features

  5. limits 为所有值均为默认值的支持的限制对象。

  6. 对于 descriptor.requiredLimits 中每个 (key, value):

    1. 如果 value 不为 undefinedvalue 优于 limits[key]:

      1. limits[key] = value

  7. device 为新设备对象。

  8. device.[[adapter]] = adapter

  9. device.[[features]] = features

  10. device.[[limits]] = limits

  11. 返回 device
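上述创建步骤中特性与限制的解析可草绘如下(非规范;"优于"的比较此处以最大值类限制为例,即更大为优):

```javascript
// 示意:按"创建新设备"的步骤计算设备最终拥有的 features 与 limits
function resolveDeviceCapabilities(descriptor, defaultLimits) {
  const features = new Set(descriptor.requiredFeatures ?? []);
  if (features.has('texture-formats-tier2')) features.add('texture-formats-tier1');
  if (features.has('texture-formats-tier1')) features.add('rg11b10ufloat-renderable');
  features.add('core-features-and-limits');

  const limits = { ...defaultLimits };
  for (const [key, value] of Object.entries(descriptor.requiredLimits ?? {})) {
    // 仅当请求值"优于"当前值时才采用(此处以最大值类限制为例)
    if (value !== undefined && value > limits[key]) limits[key] = value;
  }
  return { features, limits };
}
```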

每当用户代理需要撤销设备访问权限时,会在该设备的设备时间线上调用 丢失设备(device, "unknown"),该操作可能优先于当前队列中的其他操作。

如果某操作失败且副作用可能导致设备上的对象状态可见变化或内部实现/驱动状态损坏,应当丢失该设备以防止这些变化被观察到。

注: 对于并非由应用通过 destroy() 主动发起的设备丢失,用户代理应无条件向开发者发出警告,即使 lost promise 已被处理。此类场景应极其罕见,而该信号对开发者至关重要,因为大多数 WebGPU API 会尽量表现为“一切正常”以避免中断运行时流程:不会抛出校验错误,大多数 promise 正常 resolve 等。

丢失设备(device, reason),请执行以下设备时间线步骤:
  1. 使 device 无效

  2. device.[[content device]]内容时间线上执行:

    1. 用新的 GPUDeviceLostInfo resolve device.lost,其中 reason 设为 reasonmessage 设为实现自定义值。

      注: message 不应泄露不必要的用户/系统信息,且绝不应用于应用程序解析。

  3. 完成所有等待 device 变为丢失的未完成步骤。

注:丢失的设备不会再生成错误。参见 § 22 错误与调试

监听时间线事件 event设备 device,并由 timeline 上的 steps 处理:

则在 timeline 上执行 steps

3.6. 可选功能

WebGPU 适配器设备具备功能,这些功能描述了WebGPU在不同实现之间的差异, 通常是由于硬件或系统软件的限制。 一项功能可以是特性限制

用户代理不得暴露超过 32 种可区分的配置或分桶。

适配器的功能必须符合§ 4.2.1 适配器功能保证

仅支持的功能可以在requestDevice()中请求; 请求不支持的功能会导致失败。

设备的功能在创建一个新设备时确定:从适配器功能的默认值(无特性、默认的支持的限制)开始,再加上在 requestDevice() 中请求的功能。此后无论适配器的功能如何,都按设备的这组功能强制执行。

此处存在追踪向量。 有关隐私方面的考虑,请参阅§ 2.2.1 机器特定的功能和限制

3.6.1. 特性

特性是一组可选的 WebGPU 功能,并非所有实现都支持这些功能,通常是由于硬件或系统软件的限制所致。

所有特性都是可选的,但适配器会对其可用性做出一定保证 (参见§ 4.2.1 适配器功能保证)。

设备仅支持在创建时确定的那组特性(参见§ 3.6 可选功能)。 API 调用将根据这些特性(而非适配器的特性)进行校验:

只有当 GPUObjectBaseobject[[device]].[[features]] 包含 feature 时,GPUFeatureName feature 才被认为是 启用的

每个特性所启用的功能描述可参见特性索引

注意: 启用特性未必是理想选择,可能会影响性能。 因此,为了提升不同设备和实现间的可移植性,应用程序通常只应请求实际需要的特性。
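实践中,应用可以先检查适配器支持哪些特性,再只请求真正需要的那部分(非规范草图;所列特性名仅为示例):

```javascript
// 纯函数示意:从希望使用的特性中筛选出适配器实际支持的部分
function pickSupportedFeatures(wanted, supported) {
  return wanted.filter(f => supported.has(f));
}

// 假设的浏览器用法:仅请求受支持的可选特性
async function requestDeviceWithOptionalFeatures(adapter) {
  const optional = ['timestamp-query', 'texture-compression-bc']; // 示例特性名
  return adapter.requestDevice({
    requiredFeatures: pickSupportedFeatures(optional, adapter.features),
  });
}
```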

3.6.2. 限制

每个限制都是对设备上 WebGPU 使用的数值限制。

注意: 设置“更好”的限制未必是理想选择,因为这样可能会对性能产生影响。 因此,为了提升不同设备和实现之间的可移植性,应用程序通常只有在确实需要时才请求比默认值更好的限制。

每个限制都有一个默认值。

适配器始终保证支持默认值或更好的 (参见§ 4.2.1 适配器功能保证)。

设备仅支持在创建时确定的那组限制(参见§ 3.6 可选功能)。 API 调用根据这些限制进行校验(而非适配器的限制), 不允许“更好”或更差。

对于任何给定的限制,一些值是更好的更好的限制值始终放宽校验,从而使更多程序变得有效。 对于每个限制类别,“更好”的定义如下。

不同的限制具有不同的限制类别

最大值

限制在传递给 API 的某些值上强制设置一个最大值。

更高的值是更好的

仅能设置为 ≥ 默认值 的值。 较低的值会被限制为默认值

对齐

限制在传递给 API 的某些值上强制设置一个最小对齐;即该值必须是限制的倍数。

较低的值是更好的

仅能设置为 ≤ 默认值 的2的幂次。 非2的幂次值是无效的。 更高的2的幂次值会被限制为默认值
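两类限制类别的取值规则(最大值类:更高为优,低于默认的请求被限制为默认值;对齐类:更低为优,须为 2 的幂,高于默认的请求被限制为默认值)可草绘如下(非规范示意):

```javascript
// 示意:按限制类别解析请求的限制值
function resolveRequestedLimit(limitClass, requested, defaultValue) {
  if (limitClass === 'maximum') {
    // 更高为优;低于默认值的请求被提升为默认值
    return Math.max(requested, defaultValue);
  }
  if (limitClass === 'alignment') {
    // 更低为优;必须是 2 的幂,高于默认值的请求被降为默认值
    if (requested <= 0 || (requested & (requested - 1)) !== 0) {
      throw new TypeError('alignment 类限制必须为 2 的幂');
    }
    return Math.min(requested, defaultValue);
  }
  throw new TypeError('未知限制类别');
}
```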

支持的限制 对象具有 WebGPU 定义的每个限制的值:

限制名称 类型 限制类别 默认值
maxTextureDimension1D GPUSize32 最大值 8192
创建dimension "1d"纹理时, size.width 的最大允许值。
maxTextureDimension2D GPUSize32 最大值 8192
创建dimension "2d"纹理时, size.widthsize.height 的最大允许值。
maxTextureDimension3D GPUSize32 最大值 2048
创建dimension "3d"纹理时, size.widthsize.heightsize.depthOrArrayLayers 的最大允许值。
maxTextureArrayLayers GPUSize32 最大值 256
创建dimension "2d"纹理时, size.depthOrArrayLayers 的最大允许值。
maxBindGroups GPUSize32 最大值 4
创建GPUPipelineLayout时, bindGroupLayouts 中允许的GPUBindGroupLayouts的最大数量。
maxBindGroupsPlusVertexBuffers GPUSize32 最大值 24
同时使用的绑定组和顶点缓冲区槽位的最大数量,包括所有低于最大索引的空槽位。 在createRenderPipeline()绘制调用中验证。
maxBindingsPerBindGroup GPUSize32 最大值 1000
创建GPUBindGroupLayout时可用的绑定索引数量。

注意: 此限制具有规范性,但属于人为设定。 按默认绑定槽位限制,一个绑定组实际上无法使用1000个绑定, 但这允许GPUBindGroupLayoutEntry.binding 的值达到999。这一限制允许实现把绑定空间当作数组而不是稀疏映射结构,便于内存管理。

maxDynamicUniformBuffersPerPipelineLayout GPUSize32 最大值 8
在一个GPUPipelineLayout中, 所有为动态偏移的uniform buffer的GPUBindGroupLayoutEntry条目的最大数量。 详见绑定槽位限制说明
maxDynamicStorageBuffersPerPipelineLayout GPUSize32 最大值 4
在一个GPUPipelineLayout中, 所有为动态偏移的storage buffer的GPUBindGroupLayoutEntry条目的最大数量。 详见绑定槽位限制说明
maxSampledTexturesPerShaderStage GPUSize32 最大值 16
对于每个GPUShaderStage stage,在一个GPUPipelineLayout中, 所有采样纹理的GPUBindGroupLayoutEntry条目的最大数量。 详见绑定槽位限制说明
maxSamplersPerShaderStage GPUSize32 最大值 16
对于每个GPUShaderStage stage,在一个GPUPipelineLayout中, 所有采样器的GPUBindGroupLayoutEntry条目的最大数量。 详见绑定槽位限制说明
maxStorageBuffersPerShaderStage GPUSize32 最大值 8
对于每个GPUShaderStage stage,在一个GPUPipelineLayout中, 所有存储缓冲区的GPUBindGroupLayoutEntry条目的最大数量。 详见绑定槽位限制说明
maxStorageTexturesPerShaderStage GPUSize32 最大值 4
对于每个可能的GPUShaderStage stage, 在一个GPUPipelineLayout中, 所有存储纹理类型GPUBindGroupLayoutEntry的最大数量。 详见绑定槽位限制说明
maxUniformBuffersPerShaderStage GPUSize32 最大值 12
对于每个可能的GPUShaderStage stage, 在一个GPUPipelineLayout中, 所有uniform buffer类型GPUBindGroupLayoutEntry的最大数量。 详见绑定槽位限制说明
maxUniformBufferBindingSize GPUSize64 最大值 65536 字节
绑定类型为 GPUBindGroupLayoutEntry ,且entry.buffer?.type"uniform" 时, GPUBufferBinding.size 的最大值。
maxStorageBufferBindingSize GPUSize64 最大值 134217728 字节(128 MiB)
绑定类型为 GPUBindGroupLayoutEntry ,且entry.buffer?.type"storage""read-only-storage" 时, GPUBufferBinding.size 的最大值。
minUniformBufferOffsetAlignment GPUSize32 对齐 256 字节
绑定类型为 GPUBindGroupLayoutEntry ,且entry.buffer?.type"uniform" 时, GPUBufferBinding.offset 以及 setBindGroup() 传入的动态偏移量所需的对齐要求。
minStorageBufferOffsetAlignment GPUSize32 对齐 256 字节
绑定类型为 GPUBindGroupLayoutEntry ,且entry.buffer?.type"storage""read-only-storage" 时, GPUBufferBinding.offset 以及 setBindGroup() 传入的动态偏移量所需的对齐要求。
maxVertexBuffers GPUSize32 最大值 8
创建GPURenderPipeline时,允许的buffers的最大数量。
maxBufferSize GPUSize64 最大值 268435456 字节(256 MiB)
创建GPUBuffer时,size的最大数值。
maxVertexAttributes GPUSize32 最大值 16
创建GPURenderPipeline时,所有attributesbuffers中的总和的最大数量。
maxVertexBufferArrayStride GPUSize32 最大值 2048 字节
创建GPURenderPipeline时,arrayStride的最大允许值。
maxInterStageShaderVariables GPUSize32 最大值 16
阶段间通信(如顶点输出、片元输入)可用的输入或输出变量的最大数量。
maxColorAttachments GPUSize32 最大值 8
GPURenderPipelineDescriptor.fragment.targetsGPURenderPassDescriptor.colorAttachmentsGPURenderPassLayout.colorFormats 中允许的最大颜色附件数量。
maxColorAttachmentBytesPerSample GPUSize32 最大值 32
渲染管线输出数据中,跨所有颜色附件存储一个采样(像素或子像素)所需的最大字节数。
maxComputeWorkgroupStorageSize GPUSize32 最大值 16384 字节
一个计算阶段GPUShaderModule入口点可用的workgroup存储的最大字节数。
maxComputeInvocationsPerWorkgroup GPUSize32 最大值 256
一个计算阶段GPUShaderModule入口点的workgroup_size各维度乘积的最大值。
maxComputeWorkgroupSizeX GPUSize32 最大值 256
一个计算阶段GPUShaderModule入口点的workgroup_size X 维度的最大值。
maxComputeWorkgroupSizeY GPUSize32 最大值 256
一个计算阶段GPUShaderModule入口点的workgroup_size Y 维度的最大值。
maxComputeWorkgroupSizeZ GPUSize32 最大值 64
一个计算阶段GPUShaderModule入口点的workgroup_size Z 维度的最大值。
maxComputeWorkgroupsPerDimension GPUSize32 最大值 65535
dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ)的参数的最大值。
3.6.2.1. GPUSupportedLimits

GPUSupportedLimits 用于暴露适配器或设备的支持的限制。 参见 GPUAdapter.limitsGPUDevice.limits

[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxBindGroupsPlusVertexBuffers;
    readonly attribute unsigned long maxBindingsPerBindGroup;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long long maxUniformBufferBindingSize;
    readonly attribute unsigned long long maxStorageBufferBindingSize;
    readonly attribute unsigned long minUniformBufferOffsetAlignment;
    readonly attribute unsigned long minStorageBufferOffsetAlignment;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long long maxBufferSize;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
    readonly attribute unsigned long maxInterStageShaderVariables;
    readonly attribute unsigned long maxColorAttachments;
    readonly attribute unsigned long maxColorAttachmentBytesPerSample;
    readonly attribute unsigned long maxComputeWorkgroupStorageSize;
    readonly attribute unsigned long maxComputeInvocationsPerWorkgroup;
    readonly attribute unsigned long maxComputeWorkgroupSizeX;
    readonly attribute unsigned long maxComputeWorkgroupSizeY;
    readonly attribute unsigned long maxComputeWorkgroupSizeZ;
    readonly attribute unsigned long maxComputeWorkgroupsPerDimension;
};
3.6.2.2. GPUSupportedFeatures

GPUSupportedFeatures 是一个 类似集合(setlike) 接口。它的 集合条目 是适配器或设备支持的 GPUFeatureName 值。 它只能包含 GPUFeatureName 枚举中的字符串。

[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};
注意:
GPUSupportedFeatures集合条目 类型为 DOMString, 这样可以让用户代理优雅地处理后续规范修订中新增但用户代理尚未识别的有效 GPUFeatureName。 如果 集合条目 的类型是 GPUFeatureName,如下代码将抛出 TypeError, 而不是返回 false
检查对未知特性的支持:
if (adapter.features.has('unknown-feature')) {
    // 使用 unknown-feature
} else {
    console.warn('unknown-feature is not supported by this adapter.');
}
3.6.2.3. WGSLLanguageFeatures

WGSLLanguageFeaturesnavigator.gpu.wgslLanguageFeatures类似集合(setlike) 接口。 它的 集合条目 是该实现支持的 WGSL 语言扩展 的字符串名称(不论适配器或设备)。

[Exposed=(Window, Worker), SecureContext]
interface WGSLLanguageFeatures {
    readonly setlike<DOMString>;
};
3.6.2.4. GPUAdapterInfo

GPUAdapterInfo 用于暴露关于适配器的各种标识信息。

GPUAdapterInfo 的成员不保证有任何特定值;如果没有提供值,该属性将返回空字符串 ""。是否揭示这些值由用户代理自行决定,并且某些设备上可能所有值都为空。因此,应用必须能够处理任何可能的 GPUAdapterInfo 值,包括这些值缺失的情况。

适配器的 GPUAdapterInfo 可通过 GPUAdapter.infoGPUDevice.adapterInfo 获取。 这些信息是不可变的:对于某个适配器,每次访问同一个 GPUAdapterInfo 属性都会返回相同的值。

注意: 虽然 GPUAdapterInfo 属性一旦访问就不可变, 但实现可以在首次访问前延后决定每个属性的内容。

注意: 即便其他 GPUAdapter 实例代表同一物理适配器, 它们在 GPUAdapterInfo 中揭示的值也可能不同。 但除非有特殊事件提升了页面可访问的标识信息(本规范未定义此类事件),否则它们应当揭示相同的值。

此处存在追踪向量。 关于隐私考虑,请参见 § 2.2.6 适配器标识符

[Exposed=(Window, Worker), SecureContext]
interface GPUAdapterInfo {
    readonly attribute DOMString vendor;
    readonly attribute DOMString architecture;
    readonly attribute DOMString device;
    readonly attribute DOMString description;
    readonly attribute unsigned long subgroupMinSize;
    readonly attribute unsigned long subgroupMaxSize;
    readonly attribute boolean isFallbackAdapter;
};

GPUAdapterInfo 拥有如下属性:

vendor类型为 DOMString,只读

适配器的厂商名称(如有)。否则为空字符串。

architecture类型为 DOMString,只读

适配器所属 GPU 家族或类别的名称(如有)。否则为空字符串。

device类型为 DOMString,只读

适配器的厂商自定义标识符(如有)。否则为空字符串。

注意: 这是代表适配器类型的值。例如,可能是 PCI 设备ID。它不会像序列号一样唯一标识某块硬件。

description类型为 DOMString,只读

驱动报告的关于适配器的人类可读字符串描述(如有)。否则为空字符串。

注意: description 没有采用任何格式化,不建议尝试解析此值。根据 GPUAdapterInfo 改变行为的应用(如为已知驱动问题应用修正),应尽量依赖其他字段。

subgroupMinSize类型为 unsigned long,只读

如果支持 "subgroups" 特性,则为适配器支持的最小 子组大小

subgroupMaxSize类型为 unsigned long,只读

如果支持 "subgroups" 特性,则为适配器支持的最大 子组大小

isFallbackAdapter类型为 boolean,只读

适配器是否为回退适配器

要为指定的 adapter adapter 创建一个 新的适配器信息(new adapter info),请执行如下内容时间线步骤:
  1. adapterInfo 为新的 GPUAdapterInfo

  2. 如果已知厂商,则将 adapterInfo.vendor 设为 adapter 的厂商名,并格式化为规范化标识符字符串。为保护隐私,用户代理也可以将 adapterInfo.vendor 设为空字符串,或一个合理近似的厂商名(同样为规范化标识符字符串)。

  3. 如果已知架构,则将 adapterInfo.architecture 设为代表 adapter 所属 GPU 家族或类别的规范化标识符字符串。为保护隐私,用户代理也可以设为空字符串,或合理近似的架构名(同样为规范化标识符字符串)。

  4. 如果已知设备,则将 adapterInfo.device 设为 adapter 的厂商自定义标识符,并格式化为规范化标识符字符串。为保护隐私,用户代理也可以设为空字符串,或合理近似的标识符(同样为规范化标识符字符串)。

  5. 如果已知描述,则将 adapterInfo.description 设为驱动报告的 adapter 描述。为保护隐私,用户代理也可以设为空字符串,或合理近似的描述。

  6. 如果支持 "subgroups" 特性,则将 subgroupMinSize 设为支持的最小子组大小;否则设为 4。

    注意: 为保护隐私,用户代理可以选择不支持某些特性,或为属性提供不会区分不同设备但依然可用的值(如对所有设备使用默认值 4)。

  7. 如果支持 "subgroups" 特性,则将 subgroupMaxSize 设为支持的最大子组大小;否则设为 128。

    注意: 为保护隐私,用户代理可以选择不支持某些特性,或为属性提供不会区分不同设备但依然可用的值(如对所有设备使用默认值 128)。

  8. adapterInfo.isFallbackAdapter 设为 adapter.[[fallback]]

  9. 返回 adapterInfo

规范化标识符字符串(normalized identifier string)指符合如下模式的字符串:

[a-z0-9]+(-[a-z0-9]+)*

(可使用的字符为:a-z、0-9 与 -)
合法规范化标识符字符串的示例包括:
  • gpu

  • 3d

  • 0x3b2f

  • next-gen

  • series-x20-ultra
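该模式可以直接用正则表达式检验(非规范示意):

```javascript
// 规范化标识符字符串的模式:[a-z0-9]+(-[a-z0-9]+)*
const NORMALIZED_ID = /^[a-z0-9]+(-[a-z0-9]+)*$/;

function isNormalizedIdentifier(s) {
  return NORMALIZED_ID.test(s);
}
```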

3.7. 扩展文档

“扩展文档”是描述新功能的附加文档,这些功能是非规范性的,不属于 WebGPU/WGSL 规范的一部分。 它们描述了在这些规范基础上构建的功能,通常包含一个或多个新的 API 特性标志和/或 WGSL enable 指令,或与其他草案 Web 规范的交互。

WebGPU 实现不得暴露扩展功能;这样做属于规范违规。 新功能只有在被集成到 WebGPU 规范(本文档)和/或 WGSL 规范后,才成为 WebGPU 标准的一部分。

3.8. 源限制(Origin Restrictions)

WebGPU 允许访问存储在图片、视频和画布中的图像数据。 出于安全原因,对跨域媒体的使用施加了限制,因为着色器可以被用来间接推测已上传到 GPU 的纹理内容。

WebGPU 不允许上传 不是 origin-clean 的图片源。

这也意味着,使用 WebGPU 渲染的 canvas 的 origin-clean 标志永远不会被置为 false

关于为图片和视频元素发起 CORS 请求的更多信息,请参考:

3.9. 任务源(Task Sources)

3.9.1. WebGPU 任务源

WebGPU 定义了一个新的 任务源,称为 WebGPU 任务源。 它用于 uncapturederror 事件和 GPUDevice.lost

若要为 GPUDevice device 排队一个全局任务(queue a global task),并在内容时间线上执行一系列步骤 steps
  1. WebGPU 任务源 上,为用于创建 device 的全局对象,按 steps 排队一个全局任务

3.9.2. 自动过期任务源

WebGPU 定义了一个新的 任务源,称为 自动过期任务源。 它用于自动、定时地销毁某些对象:

要用 GPUDevice device 和一系列步骤 steps内容时间线排队一个自动过期任务
  1. 自动过期任务源 上,为用于创建 device 的全局对象,按 steps 排队一个全局任务

来自 自动过期任务源 的任务应当以高优先级处理;尤其是排入队列后,应当在用户自定义(JavaScript)任务之前执行。

注意:
这种行为更可预测,也能通过及早发现关于隐式生命周期的错误假设,帮助开发者编写更具移植性的应用程序。开发者仍然强烈建议在多个实现中进行测试。

实现说明: 以高优先级处理过期“任务”也可以通过在事件循环处理模型的某个固定点插入额外步骤来实现,而不一定非要运行实际的任务。

3.10. 色彩空间与编码

WebGPU 不提供色彩管理。WebGPU 内的所有值(如纹理元素)都是原始数值,不是经过色彩管理的色值。

WebGPU 确实与色彩管理的输出(通过 GPUCanvasConfiguration)和输入(通过 copyExternalImageToTexture()importExternalTexture())对接。因此,必须在 WebGPU 数值和外部色值之间进行色彩转换。每个接口点都会本地定义一种编码(色彩空间、传递函数和 alpha 预乘),用于解释 WebGPU 的数值。

WebGPU 允许 PredefinedColorSpace 枚举中的所有色彩空间。注意,每个色彩空间都定义了扩展范围(详见 CSS 相关定义),可以表示其空间外的颜色值(包括色度和明度)。

超出色域的预乘 RGBA 值指 R/G/B 其中任意一个通道的值大于 alpha 通道的值。例如,预乘 sRGB RGBA 值 [1.0, 0, 0, 0.5] 表示原始(未预乘)颜色 [2, 0, 0] 且 alpha 为 50%,在 CSS 中可写作 rgb(srgb 2 0 0 / 50%)。和所有 sRGB 色域外的颜色一样,这在扩展色彩空间中是有定义的点(alpha 为 0 时除外,此时没有颜色)。但若此类值输出到可见画布,结果未定义(见 GPUCanvasAlphaMode "premultiplied")。

3.10.1. 色彩空间转换

颜色在空间间转换时,需根据上述定义将其在一个空间中的表示转换为另一个空间中的表示。

如果源值少于4个 RGBA 通道,则缺失的绿/蓝/alpha 通道分别设为0, 0, 1,然后再做色彩空间/编码和 alpha 预乘转换。转换后,如果目标需要的通道数少于4,则忽略多余通道。

注意: 灰度图像通常在其色彩空间下表现为 RGB 值 (V, V, V) 或 RGBA 值 (V, V, V, A)
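通道补齐与丢弃的规则可用如下非规范草图说明:

```javascript
// 示意:不足 4 通道时,缺失的 G/B/A 分别补 0、0、1;多余通道被忽略
function normalizeToRGBA(channels) {
  const defaults = [0, 0, 0, 1];
  return defaults.map((d, i) => (i < channels.length ? channels[i] : d));
}
```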

颜色在转换时不会被有损截断:若源色值本就在目标色彩空间的色域外,转换后会产生超出[0, 1]范围的值。例如,sRGB 目标下,如果源为 rgba16float,色彩空间更广(如 Display-P3),或为预乘且包含超出色域的预乘值,都可能出现上述情况。

同样,如果源值有高位深(如 16 位/分量 PNG)或扩展范围(如 float16 存储的 canvas),这些颜色会在转换过程中保留,且中间计算的精度不少于源。

3.10.2. 色彩空间转换省略

若源与目标的色彩空间/编码一致,则无需转换。通常,若某一步是恒等函数(无操作),实现应当出于性能考虑省略该步骤。

为获得最佳性能,应用应当设置色彩空间和编码选项,以最小化整个流程中所需的转换次数。 对于各种 GPUCopyExternalImageSourceInfo 图像源:

注意: 在依赖这些特性前,请检查浏览器的实现支持。

3.11. JavaScript 到 WGSL 的数值转换

WebGPU API 的多个部分(可覆盖管线 constants 和渲染通道清除值)会接收来自 WebIDL(doublefloat)的数值,并将其转换为 WGSL 值(booli32u32f32f16)。

将类型为 doublefloat 的 IDL 值 idlValue 转换为 WGSL 类型(to WGSL type) T(可能抛出 TypeError),请执行以下设备时间线步骤:

注意:TypeError 只会在设备时间线内产生,并不会抛到 JavaScript。

  1. 断言 idlValue 是有限值,因为它不是 unrestricted doubleunrestricted float

  2. vidlValue 转换为 ECMAScript 值 后得到的 ECMAScript Number。

  3. 如果 Tbool

    返回与 !v 转换为 IDL 类型 boolean 后结果对应的 WGSL bool 值。

    注意: 本算法是在从 ECMAScript 值到 IDL doublefloat 值的转换之后调用的。如果原始 ECMAScript 值为非数值、非布尔值(如 []{}),那么 WGSL bool 结果可能与直接转换为 IDL boolean 的结果不同。

    如果 Ti32

    返回 WGSL i32 值,该值对应于 ?v 转换为类型为 [EnforceRange] longIDL 值 的结果。

    如果 Tu32

    返回 WGSL u32 值,该值对应于 ?v 转换为类型为 [EnforceRange] unsigned longIDL 值 的结果。

    如果 Tf32

    返回 WGSL f32 值,该值对应于 ?v 转换为类型为 floatIDL 值 的结果。

    如果 Tf16
    1. wgslF32 为 WGSL f32 值,该值对应于 ?v 转换为类型为 floatIDL 值 的结果。

    2. 返回 f16(wgslF32),即 !WGSL 浮点数转换 所定义,将 WGSL f32 值转换为 f16 的结果。

    注意:只要值在 f32 的范围内,即使超出 f16 的范围,也不会抛出错误。
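上述转换可草绘为如下非规范 JavaScript(EnforceRange 语义以"截断后做范围检查并抛 TypeError"示意;f32 以 Math.fround 近似,f16 从略):

```javascript
// 示意:将 JavaScript 数值转换为 WGSL 标量类型
function convertToWGSL(value, type) {
  switch (type) {
    case 'bool':
      return Boolean(value);
    case 'i32': {
      // [EnforceRange] long:截断小数后,超出 [-2^31, 2^31-1] 抛 TypeError
      const v = Math.trunc(value);
      if (v < -(2 ** 31) || v > 2 ** 31 - 1) throw new TypeError('out of range for i32');
      return v;
    }
    case 'u32': {
      // [EnforceRange] unsigned long:超出 [0, 2^32-1] 抛 TypeError
      const v = Math.trunc(value);
      if (v < 0 || v > 2 ** 32 - 1) throw new TypeError('out of range for u32');
      return v;
    }
    case 'f32':
      return Math.fround(value); // 舍入到最接近的单精度浮点值
    default:
      throw new TypeError('unsupported type');
  }
}
```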

若要将 GPUColor color 转换为纹理格式(texture format) format 的 texel 值(可能抛出 TypeError),请执行以下设备时间线步骤:

注意:TypeError 只会在设备时间线内产生,并不会抛到 JavaScript。

  1. format 的各分量类型(断言全部相同)为:

    浮点类型或归一化类型

    Tf32

    有符号整型

    Ti32

    无符号整型

    Tu32

  2. wgslColor 为 WGSL vec4<T>,其 4 个分量为 color 的 RGBA 通道,每个都 转换为 WGSL 类型 T

  3. § 23.2.7 输出合并 的同样规则将 wgslColor 转换为 format,并返回结果。

    注意: 对于非整型类型,实际取值为实现定义。对于归一化类型,值会被限制在该类型允许的范围内。

注意: 换言之,写入的值就如同由 WGSL 着色器以 f32i32u32 类型的 vec4 输出。

4. 初始化

WindowWorkerGlobalScope 上下文中,GPU 对象通过 NavigatorWorkerNavigator 接口暴露,访问方式为 navigator.gpu

interface mixin NavigatorGPU {
    [SameObject, SecureContext] readonly attribute GPU gpu;
};
Navigator includes NavigatorGPU;
WorkerNavigator includes NavigatorGPU;

NavigatorGPU 具有以下属性:

gpu类型为 GPU,只读

一个全局单例,提供诸如 requestAdapter() 的顶级入口点。

4.2. GPU

GPU 是 WebGPU 的入口点。

[Exposed=(Window, Worker), SecureContext]
interface GPU {
    Promise<GPUAdapter?> requestAdapter(optional GPURequestAdapterOptions options = {});
    GPUTextureFormat getPreferredCanvasFormat();
    [SameObject] readonly attribute WGSLLanguageFeatures wgslLanguageFeatures;
};

GPU 具有以下方法:

requestAdapter(options)

从用户代理请求一个 adapter。用户代理选择是否返回适配器,并根据提供的选项进行选择。

调用对象: GPU this.

参数:

参数表:GPU.requestAdapter(options) 方法。
参数 类型 可空 可选 描述
options GPURequestAdapterOptions 用于选择适配器的条件。

返回: Promise<GPUAdapter?>

内容时间线步骤:

  1. 设 contentTimeline 为当前内容时间线

  2. 设 promise一个新的 promise

  3. 为 this设备时间线上执行初始化步骤

  4. 返回 promise

设备时间线 初始化步骤
  1. 以下步骤的所有要求必须满足。

    1. options.featureLevel必须为一个功能级别字符串

    如果满足要求用户代理选择返回一个适配器:

    1. 设置 adapter 为根据 § 4.2.2 适配器选择 中的规则和 options 的条件选择的 adapter,并遵循 § 4.2.1 适配器能力保证。根据定义初始化适配器的属性:

      1. 根据适配器的支持能力设置 adapter.[[limits]]adapter.[[features]]adapter.[[features]] 必须包含 "core-features-and-limits"

      2. 如果 adapter 满足回退适配器的条件,则将 adapter.[[fallback]] 设置为 true。否则,设置为 false

      3. adapter.[[xrCompatible]] 设置为 options.xrCompatible

    否则:

    1. adapternull

  2. contentTimeline 上执行后续步骤。

内容时间线步骤:
  1. 如果 adapter 不为 null

    1. 解析(Resolve) promise,返回一个封装了 adapter 的新 GPUAdapter

  2. 否则,解析(Resolve) promise,返回 null

getPreferredCanvasFormat()

返回在本系统上显示 8 位深度、标准动态范围内容的最佳 GPUTextureFormat。只能返回 "rgba8unorm""bgra8unorm"

返回值可作为 format 传递给 configure() 方法,以确保相关 GPUCanvasContext 能高效显示其内容。

注意: 对于不显示到屏幕上的 canvas,使用此格式可能有益,也可能没有帮助。

调用对象: GPU this。

返回: GPUTextureFormat

内容时间线 步骤:

  1. 根据本系统 WebGPU canvas 显示的最佳格式,返回 "rgba8unorm""bgra8unorm"

GPU 有以下属性:

wgslLanguageFeatures 类型为 WGSLLanguageFeatures,只读

支持的 WGSL 语言扩展的名称。 支持的语言扩展会自动启用。

适配器可能随时过期。当系统状态发生可能影响任何 requestAdapter() 调用结果的变化时,用户代理应当使所有先前返回的适配器过期。例如:

  • 物理适配器被添加或移除(通过插拔、驱动更新、硬件挂起恢复等)。

  • 系统的电源配置发生变化(如笔记本拔掉电源、电源设置改变等)。

注意: 用户代理可能会选择经常 过期 适配器,即使系统状态没有发生变化(例如在适配器创建后几秒或几分钟)。 这有助于混淆真实的系统状态变化,并让开发者更加意识到,在调用 requestAdapter() 之前,总是需要重新请求适配器再调用 requestDevice()。 如果应用遇到这种情况,标准的设备丢失恢复处理应当能够使其恢复。

无提示地请求一个 GPUAdapter
const gpuAdapter = await navigator.gpu.requestAdapter();

4.2.1. 适配器能力保证

任何由 GPUAdapter 通过 requestAdapter() 返回的对象都必须提供以下保证:

4.2.2. 适配器选择

GPURequestAdapterOptions 为用户代理提供指示,表明哪些配置适合应用程序。

dictionary GPURequestAdapterOptions {
    DOMString featureLevel = "core";
    GPUPowerPreference powerPreference;
    boolean forceFallbackAdapter = false;
    boolean xrCompatible = false;
};
enum GPUPowerPreference {
    "low-power",
    "high-performance",
};

GPURequestAdapterOptions 拥有如下成员:

featureLevel 类型为 DOMString,默认值为 "core"

适配器请求的“功能级别”。

允许的 功能级别字符串值为:

"core"

无影响。

"compatibility"

无影响。

注意: 该值为将来引入额外验证限制时保留。 目前应用不应使用此值。

powerPreference 类型为 GPUPowerPreference

可选,为用户代理提供一个提示,指示应从系统可用适配器中选择哪种类别的 适配器

该提示的值可能影响选择哪个适配器,但不得影响是否返回适配器。

注意: 该提示的主要用途是在多GPU系统上影响所用的GPU。 例如,某些笔记本电脑有低功耗集成GPU和高性能独立GPU。该提示也可能影响所选GPU的电源配置,以匹配请求的电源偏好。

注意: 根据具体硬件配置(如电池状态、外接显示器或可卸载GPU),即使设置了相同的电源偏好,用户代理也可能选择不同的 适配器。 在相同硬件配置和状态下,且 powerPreference 一致时,用户代理通常会选择同一个适配器。

必须为以下值之一:

undefined(或未设置)

不给用户代理提供任何提示。

"low-power"

表示请求优先考虑节能而非性能。

注意: 如果内容渲染需求较低(如每秒只渲染一帧,仅绘制相对简单的几何体和着色器,或只用小型HTML canvas元素),一般应使用该选项。 如果内容允许,鼓励开发者使用此值,因为这可能大幅提升便携设备的电池寿命。

"high-performance"

表示请求优先考虑性能而非能耗。

注意: 选择此值时,开发者应注意,对于在所选适配器上创建的 设备,用户代理更可能强制设备丢失,以便切换到低功耗适配器节能。 只有在确有必要时,开发者才应指定此值,因为它可能会大幅降低便携设备的电池寿命。

forceFallbackAdapter 类型为 boolean,默认值为 false

若设为 true,表示只能返回 回退适配器。如果用户代理不支持 回退适配器,则 requestAdapter() 会返回 null

注意: 即使 forceFallbackAdapter 设为 falserequestAdapter() 仍可能返回 回退适配器(当没有其他合适的 适配器可用,或用户代理选择返回回退适配器时)。 希望避免应用在 回退适配器上运行的开发者应在请求 GPUDevice 前检查 info.isFallbackAdapter 属性。

xrCompatible 类型为 boolean,默认值为 false

若设为 true,表示必须返回用于 WebXR会话渲染的最佳 适配器。如果用户代理或系统不支持 WebXR会话,则适配器选择可忽略此值。

注意: 如果在请求适配器时 xrCompatible 未设为 true,则从该适配器创建的 GPUDevice 无法用于 WebXR会话 渲染。

请求 "high-performance" GPUAdapter
const gpuAdapter = await navigator.gpu.requestAdapter({
    powerPreference: 'high-performance'
});

4.3. GPUAdapter

GPUAdapter 封装了一个 适配器, 并描述其能力(特性限制)。

要获取一个 GPUAdapter, 请使用 requestAdapter()

[Exposed=(Window, Worker), SecureContext]
interface GPUAdapter {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUAdapterInfo info;

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};

GPUAdapter 拥有以下不可变属性

features 类型为 GPUSupportedFeatures,只读

this.[[adapter]].[[features]] 中的值集合。

limits 类型为 GPUSupportedLimits,只读

this.[[adapter]].[[limits]] 中的限制。

info 类型为 GPUAdapterInfo,只读

关于该 GPUAdapter 所对应物理适配器的信息。

对于同一个 GPUAdapter, 暴露的 GPUAdapterInfo 值在时间上是恒定的。

每次返回的都是同一个对象。首次创建该对象的方法如下:

调用对象: GPUAdapter this.

返回: GPUAdapterInfo

内容时间线 步骤:

  1. this.[[adapter]] 返回一个新的适配器信息

[[adapter]],类型为 adapter,只读

GPUAdapter 所引用的 适配器

GPUAdapter 拥有以下方法:

requestDevice(descriptor)

从该 适配器 请求一个 设备

这是一次性操作:如果成功返回了设备,则适配器变为 "consumed"

调用对象: GPUAdapter this.

参数:

方法 GPUAdapter.requestDevice(descriptor) 的参数。
参数 类型 可为 null 可选 描述
descriptor GPUDeviceDescriptor 请求的 GPUDevice 的描述信息。

返回: Promise<GPUDevice>

内容时间线 步骤:

  1. contentTimeline 为当前 内容时间线

  2. promise新的 promise

  3. adapterthis.[[adapter]]

  4. this设备时间线 执行 初始化步骤

  5. 返回 promise

设备时间线 初始化步骤
  1. 如果下列任何要求未满足:

    则在 contentTimeline 上执行下列步骤并返回:

    内容时间线 步骤:
    1. 拒绝 promise,并给出 TypeError

    注意: 这与浏览器完全不识别某特性的情况下抛出的错误相同(在其 GPUFeatureName 定义里)。 这使得浏览器不支持某特性和某适配器不支持某特性时的行为一致。

  2. 以下步骤中的所有要求 必须 被满足。

    1. adapter.[[state]] 不得为 "consumed"

    2. 对于 descriptor.requiredLimits 中每个 keyvalue 对(且 value 不为 undefined):

      1. key 必须支持的限制的成员名。

      2. value 不得adapter.[[limits]][key] 更高。

      3. key类别对齐value 必须 是小于 232 的 2 的幂。

      注意: 即使 valueundefined,当 key 未被识别时,用户代理应考虑给出开发者可见的警告。

    若有未满足项,则在 contentTimeline 上执行下列步骤并返回:

    内容时间线 步骤:
    1. 拒绝 promise,并给出 OperationError

  3. 如果 adapter.[[state]]"expired" 或用户代理无法完成该请求:

    1. device 为新的 设备

    2. 丢失该设备(device, "unknown")。

    3. 断言 adapter.[[state]]"expired"

      注意: 当发生这种情况时,用户代理应考虑在大多数或所有情况下向开发者发出警告。应用应从 requestAdapter() 开始重新初始化逻辑。

    否则:

    1. device 为带有 descriptor 描述能力的新设备

    2. 使 adapter 过期

  4. contentTimeline 上执行后续步骤。

内容时间线 步骤:
  1. gpuDevice 为新的 GPUDevice 实例。

  2. 设置 gpuDevice.[[device]]device

  3. 设置 device.[[content device]]gpuDevice

  4. 设置 gpuDevice.labeldescriptor.label

  5. 解决 promise,返回 gpuDevice

    注意: 如果由于适配器无法完成请求而设备已丢失,则 device.lostpromise 解决前已经解决。

请求带有默认特性和限制的 GPUDevice
const gpuAdapter = await navigator.gpu.requestAdapter();
const gpuDevice = await gpuAdapter.requestDevice();

4.3.1. GPUDeviceDescriptor

GPUDeviceDescriptor 描述一个设备请求。

dictionary GPUDeviceDescriptor
     : GPUObjectDescriptorBase {
sequence<GPUFeatureName> requiredFeatures = [];
record<DOMString, (GPUSize64 or undefined)> requiredLimits = {};
GPUQueueDescriptor defaultQueue = {};
};

GPUDeviceDescriptor 拥有以下成员:

requiredFeatures 类型为 sequence<GPUFeatureName>,默认值为 []

指定设备请求所需的特性。 如果适配器无法提供这些特性,请求会失败。

对设备上 API 调用的校验将恰好使用所指定的这组特性,不多也不少。

requiredLimits 类型为 record<DOMString, (GPUSize64 or undefined)>,默认值为 {}

指定设备请求所需的限制。 如果适配器无法提供这些限制,请求会失败。

每个非 undefined 值的 key 必须是支持的限制的成员名。

在最终设备上的 API 调用会根据该设备的实际限制(不是适配器的限制;见 § 3.6.2 限制)进行校验。

defaultQueue 类型为 GPUQueueDescriptor,默认值为 {}

默认 GPUQueue 的描述符。

当支持 "texture-compression-astc" 特性时,请求 GPUDevice
const gpuAdapter = await navigator.gpu.requestAdapter();

const requiredFeatures = [];
if (gpuAdapter.features.has('texture-compression-astc')) {
    requiredFeatures.push('texture-compression-astc');
}

const gpuDevice = await gpuAdapter.requestDevice({
    requiredFeatures
});
请求 GPUDevice 并指定更高的 maxColorAttachmentBytesPerSample 限制:
const gpuAdapter = await navigator.gpu.requestAdapter();

if (gpuAdapter.limits.maxColorAttachmentBytesPerSample < 64) {
    // 当所需限制不被支持时,采取措施,要么降级代码路径,要么提示用户其设备不满足最低要求。
}

// 请求更高的每样本最大颜色附件字节数限制。
const gpuDevice = await gpuAdapter.requestDevice({
    requiredLimits: { maxColorAttachmentBytesPerSample: 64 },
});
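上文 requestDevice() 对 requiredLimits 的校验(key 必须是受支持的限制、value 不得超出适配器限制、对齐类限制必须是小于 2^32 的 2 的幂)可以用如下纯 JavaScript 草图示意。其中 validateRequiredLimits、adapterLimits、alignmentLimitNames 等名称与参数均为示例假设;对齐类限制的比较方向(数值越小越严格)在此从略:

```javascript
// 示意性草图(非规范实现):模拟 requestDevice() 对 requiredLimits 的部分校验。
function validateRequiredLimits(required, adapterLimits, alignmentLimitNames) {
  for (const [key, value] of Object.entries(required)) {
    if (value === undefined) continue;            // undefined 视为未请求该限制
    if (!(key in adapterLimits)) return false;    // key 必须是受支持的限制名
    if (alignmentLimitNames.has(key)) {
      // 对齐类限制:必须是小于 2^32 的 2 的幂(比较方向与最大值类相反,此处从略)
      if (value <= 0 || value >= 2 ** 32) return false;
      if (!Number.isInteger(Math.log2(value))) return false;
    } else if (value > adapterLimits[key]) {
      return false;                               // 不得高于适配器的对应限制
    }
  }
  return true;
}
```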
4.3.1.1. GPUFeatureName

每个 GPUFeatureName 都标识一组功能,如果可用,可使 WebGPU 支持本来无效的额外用法。

enum GPUFeatureName {
    "core-features-and-limits",
    "depth-clip-control",
    "depth32float-stencil8",
    "texture-compression-bc",
    "texture-compression-bc-sliced-3d",
    "texture-compression-etc2",
    "texture-compression-astc",
    "texture-compression-astc-sliced-3d",
    "timestamp-query",
    "indirect-first-instance",
    "shader-f16",
    "rg11b10ufloat-renderable",
    "bgra8unorm-storage",
    "float32-filterable",
    "float32-blendable",
    "clip-distances",
    "dual-source-blending",
    "subgroups",
    "texture-formats-tier1",
    "texture-formats-tier2",
};

4.4. GPUDevice

GPUDevice 封装了一个设备并暴露该设备的功能。

GPUDevice 是顶层接口,其他 WebGPU 接口都通过它创建。

要获取一个 GPUDevice,请使用 requestDevice()

[Exposed=(Window, Worker), SecureContext]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUAdapterInfo adapterInfo;

    [SameObject] readonly attribute GPUQueue queue;

    undefined destroy();

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});
    GPUExternalTexture importExternalTexture(GPUExternalTextureDescriptor descriptor);

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);
    Promise<GPUComputePipeline> createComputePipelineAsync(GPUComputePipelineDescriptor descriptor);
    Promise<GPURenderPipeline> createRenderPipelineAsync(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;

GPUDevice 拥有以下 不可变属性

features 类型为 GPUSupportedFeatures,只读

包含设备支持的 GPUFeatureName 的集合([[device]].[[features]])。

limits 类型为 GPUSupportedLimits,只读

该设备支持的限制([[device]].[[limits]])。

queue 类型为 GPUQueue,只读

此设备的主 GPUQueue

adapterInfo 类型为 GPUAdapterInfo,只读

关于创建该 GPUDevice 所引用的设备的物理适配器的信息。

对于同一个 GPUDevice,暴露的 GPUAdapterInfo 值在时间上是恒定的。

每次返回的都是同一个对象。首次创建该对象的方法如下:

调用对象: GPUDevice this

返回: GPUAdapterInfo

内容时间线 步骤:

  1. this.[[device]].[[adapter]] 返回一个新的适配器信息

[[device]]GPUDevice 所引用的 设备

GPUDevice 拥有以下方法:

destroy()

销毁该 设备,阻止进一步的操作。所有未完成的异步操作将失败。

注意: 多次销毁同一个设备是合法的。

调用对象: GPUDevice this

内容时间线 步骤:

  1. 对该设备的所有 GPUBuffer 调用 unmap()

  2. this设备时间线 上执行后续步骤。

  1. 丢失该设备(this.[[device]], "destroyed")。

注意: 由于无法在该设备上排队新的操作,实现可以立即中止所有未完成的异步操作并释放资源,包括刚刚解除映射的内存。

GPUDevice允许的缓冲区用法 包括:
GPUDevice允许的纹理用法 包括:

4.5. 示例

一个更健壮的请求 GPUAdapterGPUDevice 并进行错误处理的示例:
let gpuDevice = null;

async function initializeWebGPU() {
    // 检查用户代理是否支持 WebGPU。
    if (!('gpu' in navigator)) {
        console.error("用户代理不支持 WebGPU。");
        return false;
    }

    // 请求适配器。
    const gpuAdapter = await navigator.gpu.requestAdapter();

    // 如果找不到合适的适配器,requestAdapter 可能会 resolve 为 null。
    if (!gpuAdapter) {
        console.error('未找到 WebGPU 适配器。');
        return false;
    }

    // 请求设备。
    // 注意:如果给可选字典传递了无效参数,promise 会 reject。为避免 promise 被拒绝,
    // 总是应在调用 requestDevice() 前检查特性和限制是否被适配器支持。
    gpuDevice = await gpuAdapter.requestDevice();

    // requestDevice 永远不会返回 null,但如果出于某种原因无法满足有效的设备请求,
    // 它可能会返回一个已经丢失的设备。
    // 此外,设备在创建后可能随时丢失(如:浏览器资源管理、驱动更新),
    // 因此应始终优雅地处理设备丢失。
    gpuDevice.lost.then((info) => {
        console.error(`WebGPU 设备已丢失: ${info.message}`);

        gpuDevice = null;

        // 多数设备丢失的原因是临时性的,因此应用应在丢失后尝试获取新设备,
        // 除非丢失原因是应用主动销毁了设备。注意,所有用旧设备创建的 WebGPU 资源(缓冲区、纹理等)
        // 都需要用新设备重新创建。
        if (info.reason != 'destroyed') {
            initializeWebGPU();
        }
    });

    onWebGPUInitialized();

    return true;
}

function onWebGPUInitialized() {
    // 在这里创建 WebGPU 资源……
}

initializeWebGPU();

5. 缓冲区

5.1. GPUBuffer

GPUBuffer 表示一块可用于 GPU 操作的内存块。 数据以线性布局存储,这意味着分配的每个字节都可以通过其相对于 GPUBuffer 起始位置的偏移来寻址, 但具体操作可能有对齐限制。某些 GPUBuffers 可以被映射,使这块内存可以通过一个称为映射的 ArrayBuffer 访问。

GPUBuffer 通过 createBuffer() 创建。 缓冲区可以 mappedAtCreation

[Exposed=(Window, Worker), SecureContext]
interface GPUBuffer {
    readonly attribute GPUSize64Out size;
    readonly attribute GPUFlagsConstant usage;

    readonly attribute GPUBufferMapState mapState;

    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();

    undefined destroy();
};
GPUBuffer includes GPUObjectBase;

enum GPUBufferMapState {
    "unmapped",
    "pending",
    "mapped",
};

GPUBuffer 拥有以下 不可变属性

size 类型为 GPUSize64Out,只读

GPUBuffer 分配的字节长度。

usage 类型为 GPUFlagsConstant,只读

GPUBuffer 的允许用法。

GPUBuffer 拥有以下 内容时间线属性

mapState 类型为 GPUBufferMapState,只读

缓冲区的当前 GPUBufferMapState

"unmapped"

缓冲区未被 this.getMappedRange() 映射使用。

"pending"

缓冲区已请求映射,但尚未完成。映射可能成功,也可能在 mapAsync() 校验时失败。

"mapped"

缓冲区已映射,可以通过 this.getMappedRange() 使用。

getter 步骤如下:

内容时间线 步骤:
  1. 如果 this.[[mapping]] 不为 null,返回 "mapped"

  2. 如果 this.[[pending_map]] 不为 null,返回 "pending"

  3. 返回 "unmapped"

[[pending_map]],类型为 Promise<void> 或 null,初始为 null

当前挂起的 mapAsync() 调用返回的 Promise

永远不会有多个挂起的映射,因为如果已有请求在进行中,mapAsync() 会立即拒绝。

[[mapping]],类型为 活动缓冲区映射null,初始为 null

仅当缓冲区当前被 getMappedRange() 映射使用时才设置。否则为 null(即使有 [[pending_map]])。

活动缓冲区映射 是具有以下字段的结构:

data,类型为 数据块

GPUBuffer 的映射数据。应用通过 ArrayBuffer 视图访问,由 getMappedRange() 返回并存储在 views 中。

mode,类型为 GPUMapModeFlags

映射的 GPUMapModeFlags,由相应 mapAsync()createBuffer() 调用指定。

range,类型为元组 [unsigned long long, unsigned long long]

GPUBuffer 被映射的范围。

views,类型为 list<ArrayBuffer>

通过 getMappedRange() 返回给应用的 ArrayBuffer。这些视图会在 unmap() 被调用时被追踪并分离。

要用 mode mode 和 range range 初始化活动缓冲区映射,执行以下 内容时间线 步骤:
  1. sizerange[1] - range[0]。

  2. data? CreateByteDataBlock(size)。

    注意:
    这可能抛出 RangeError。 为了保证一致和可预测性:
    • 对于 new ArrayBuffer() 在某一时刻能成功的任意 size,此分配 也应 同时成功。

    • 对于 new ArrayBuffer() 确定性 抛出 RangeError 的任意 size,此分配 也应 抛出。

  3. 返回具有如下内容的 活动缓冲区映射

缓冲区的映射与解除映射。
缓冲区映射失败。

GPUBuffer 拥有以下 设备时间线属性

[[internal state]]

缓冲区的当前内部状态:

"available"

缓冲区可用于队列操作(除非已失效)。

"unavailable"

缓冲区因已映射而不可用于队列操作。

"destroyed"

缓冲区因被 destroy() 而无法用于任何操作。

5.1.1. GPUBufferDescriptor

dictionary GPUBufferDescriptor
     : GPUObjectDescriptorBase {
required GPUSize64 size;
required GPUBufferUsageFlags usage;
boolean mappedAtCreation = false;
};

GPUBufferDescriptor 拥有以下成员:

size 类型为 GPUSize64

缓冲区的字节大小。

usage 类型为 GPUBufferUsageFlags

缓冲区允许的用法。

mappedAtCreation 类型为 boolean,默认值为 false

如果为 true,则缓冲区会以已映射状态创建,允许立即调用 getMappedRange()。 即使 usage 不包含 MAP_READMAP_WRITE,也可以设置 mappedAtCreationtrue。 这可用于设置缓冲区的初始数据。

保证即使缓冲区创建最终失败,在其被解除映射之前,映射范围看起来仍然可以读写。

5.1.2. 缓冲区用法

typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUBufferUsage {
    const GPUFlagsConstant MAP_READ      = 0x0001;
    const GPUFlagsConstant MAP_WRITE     = 0x0002;
    const GPUFlagsConstant COPY_SRC      = 0x0004;
    const GPUFlagsConstant COPY_DST      = 0x0008;
    const GPUFlagsConstant INDEX         = 0x0010;
    const GPUFlagsConstant VERTEX        = 0x0020;
    const GPUFlagsConstant UNIFORM       = 0x0040;
    const GPUFlagsConstant STORAGE       = 0x0080;
    const GPUFlagsConstant INDIRECT      = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};

GPUBufferUsage 标志决定了 GPUBuffer 创建后可以如何使用:

MAP_READ

缓冲区可被映射用于读取。(例如:调用 mapAsync() 并传入 GPUMapMode.READ

只能与 COPY_DST 组合使用。

MAP_WRITE

缓冲区可被映射用于写入。(例如:调用 mapAsync() 并传入 GPUMapMode.WRITE

只能与 COPY_SRC 组合使用。

COPY_SRC

缓冲区可作为拷贝操作的源。(例如:作为 copyBufferToBuffer()copyBufferToTexture() 调用的 source 参数)

COPY_DST

缓冲区可作为拷贝或写入操作的目标。(例如:作为 copyBufferToBuffer()copyTextureToBuffer() 调用的 destination 参数,或作为 writeBuffer() 的目标。)

INDEX

缓冲区可作为索引缓冲区使用。(例如:传递给 setIndexBuffer()。)

VERTEX

缓冲区可作为顶点缓冲区使用。(例如:传递给 setVertexBuffer()。)

UNIFORM

缓冲区可用作 uniform 缓冲区。(例如:用于 GPUBufferBindingLayout 的绑定组条目,且 buffer.type"uniform"。)

STORAGE

缓冲区可用作 storage 缓冲区。(例如:用于 GPUBufferBindingLayout 的绑定组条目,且 buffer.type"storage""read-only-storage"。)

INDIRECT

缓冲区可用于存储间接命令参数。(例如:作为 indirectBuffer 参数传递给 drawIndirect()dispatchWorkgroupsIndirect() 调用。)

QUERY_RESOLVE

缓冲区可用于捕获查询结果。(例如:作为 destination 参数传递给 resolveQuerySet()。)
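上述标志的组合约束(MAP_READ 只能与 COPY_DST 组合、MAP_WRITE 只能与 COPY_SRC 组合)可以用如下草图示意。仅作说明:isValidBufferUsage 为自拟名称,其中 usage 不得为 0 的检查是基于对 createBuffer() 校验规则的通行理解:

```javascript
// 与上表一致的标志位常量。
const GPUBufferUsage = {
  MAP_READ: 0x0001, MAP_WRITE: 0x0002,
  COPY_SRC: 0x0004, COPY_DST: 0x0008,
  INDEX: 0x0010, VERTEX: 0x0020, UNIFORM: 0x0040,
  STORAGE: 0x0080, INDIRECT: 0x0100, QUERY_RESOLVE: 0x0200,
};

function isValidBufferUsage(usage) {
  const { MAP_READ, MAP_WRITE, COPY_SRC, COPY_DST } = GPUBufferUsage;
  // MAP_READ 只能与 COPY_DST 组合
  if ((usage & MAP_READ) && (usage & ~(MAP_READ | COPY_DST))) return false;
  // MAP_WRITE 只能与 COPY_SRC 组合
  if ((usage & MAP_WRITE) && (usage & ~(MAP_WRITE | COPY_SRC))) return false;
  return usage !== 0;
}
```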

5.1.3. 缓冲区创建

createBuffer(descriptor)

创建一个 GPUBuffer

调用对象: GPUDevice this

参数:

GPUDevice.createBuffer(descriptor) 方法的参数。
参数 类型 可为 null 可选 描述
descriptor GPUBufferDescriptor 要创建的 GPUBuffer 描述信息。

返回: GPUBuffer

内容时间线 步骤:

  1. b! 创建新的 WebGPU 对象(this, GPUBuffer, descriptor)。

  2. 设置 b.sizedescriptor.size

  3. 设置 b.usagedescriptor.usage

  4. 如果 descriptor.mappedAtCreationtrue

    1. 如果 descriptor.size 不是 4 的倍数,抛出 RangeError

    2. 设置 b.[[mapping]]? 初始化活动缓冲区映射,mode 为 WRITE,range 为 [0, descriptor.size]

  5. this设备时间线 上执行 初始化步骤

  6. 返回 b

设备时间线 初始化步骤
  1. 如有以下任一要求未被满足, 生成校验错误使 b 失效并返回。

注意: 如果缓冲区创建失败,且 descriptor.mappedAtCreationfalse, 任何对 mapAsync() 的调用都会被拒绝,因此为支持映射而分配的任何资源都可以被回收。

  1. 如果 descriptor.mappedAtCreationtrue

    1. 设置 b.[[internal state]] 为 "unavailable"。

    否则:

    1. 设置 b.[[internal state]] 为 "available"。

  2. b 创建设备分配,并将每个字节置零。

    如果分配失败且无副作用,生成内存不足错误使 b 失效并返回。

创建一个 128 字节、可写入的 uniform 缓冲区:
const buffer = gpuDevice.createBuffer({
    size: 128,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
});

5.1.4. 缓冲区销毁

应用如果不再需要某个 GPUBuffer,可以在垃圾回收前调用 destroy() 主动放弃访问它。 销毁缓冲区还会解除映射,释放为映射分配的内存。

注意: 这样允许用户代理在所有先前提交的与该 GPUBuffer 相关的操作完成后及时回收其 GPU 内存。

GPUBuffer 拥有以下方法:

destroy()

销毁该 GPUBuffer

注意: 多次销毁同一个缓冲区是合法的。

调用对象: GPUBuffer this

返回值: undefined

内容时间线 步骤:

  1. 调用 this.unmap()

  2. this.[[device]]设备时间线 上执行后续步骤。

设备时间线 步骤:
  1. 设置 this.[[internal state]] 为 "destroyed"。

注意: 由于无法继续用该缓冲区排队新操作,实现可以立即释放资源分配,包括刚刚解除映射的内存。

5.2. 缓冲区映射

应用可以请求映射 GPUBuffer,从而通过代表该 GPUBuffer 分配部分的 ArrayBuffer 访问其内容。 映射 GPUBuffer 的请求是通过 mapAsync() 异步进行的,以便用户代理能确保 GPU 在应用访问内容前已完成对该 GPUBuffer 的使用。 被映射的 GPUBuffer 不能被 GPU 使用,必须通过 unmap() 解除映射后才能再次用于 队列时间线的工作提交。

一旦 GPUBuffer 被映射,应用可通过 getMappedRange() 同步访问其内容范围。 返回的 ArrayBuffer 只能通过 unmap()(直接调用,或经由 GPUBuffer.destroy()GPUDevice.destroy())分离,不能被转移。试图以其它方式分离它们会抛出 TypeError

typedef [EnforceRange] unsigned long GPUMapModeFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUMapMode {
    const GPUFlagsConstant READ  = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};

GPUMapMode 标志决定了调用 mapAsync()GPUBuffer 被如何映射:

READ

只对创建时带有 MAP_READ 用法的缓冲区有效。

映射后,调用 getMappedRange() 将返回包含缓冲区当前值的 ArrayBuffer。对返回的 ArrayBuffer 的更改在 unmap() 后会被丢弃。

WRITE

只对创建时带有 MAP_WRITE 用法的缓冲区有效。

映射后,调用 getMappedRange() 将返回包含缓冲区当前值的 ArrayBuffer。对返回的 ArrayBuffer 的更改会在 unmap() 后写入缓冲区。

注意: 由于 MAP_WRITE 用法只能与 COPY_SRC 组合,写映射永远不会返回 GPU 产生的数据,返回的 ArrayBuffer 只包含初始化为零的数据或网页之前映射写入的数据。

GPUBuffer 拥有以下方法:

mapAsync(mode, offset, size)

映射 GPUBuffer 的指定范围,并在缓冲区内容可以通过 getMappedRange() 访问时 resolve 返回的 Promise

返回的 Promise 的解决仅表示缓冲区已被映射,并不保证任何其它在 内容时间线 上可见的操作已完成,特别是不保证其它 Promise(如 onSubmittedWorkDone() 或其它 mapAsync() 返回的)已 resolve。

PromiseonSubmittedWorkDone() 返回则保证在该队列上之前的所有 mapAsync() 已完成。

调用对象: GPUBuffer this

参数:

GPUBuffer.mapAsync(mode, offset, size) 方法的参数。
参数 类型 可为 null 可选 描述
mode GPUMapModeFlags 缓冲区应以读或写方式映射。
offset GPUSize64 要映射的范围起始字节偏移。
size GPUSize64 要映射的范围字节大小。

返回值: Promise<undefined>

内容时间线步骤:

  1. contentTimeline 为当前 内容时间线

  2. this.mapState 不为 "unmapped"

    1. this.[[device]]设备时间线 上执行 early-reject steps

    2. 返回 被拒绝的 promise,错误为 OperationError

  3. p 为新的 Promise

  4. 设置 this.[[pending_map]] = p

  5. this.[[device]]设备时间线 上执行 validation steps

  6. 返回 p

设备时间线 early-reject steps
  1. 生成校验错误

  2. 返回。

设备时间线 validation steps
  1. 如果 sizeundefined

    1. rangeSize = max(0, this.size - offset)。

    否则:

    1. rangeSize = size

  2. 如果下述任一条件不满足:

    1. 设置 deviceLosttrue

    2. contentTimeline 上执行 map failure steps

    3. 返回。

  3. 如果下述任一条件不满足:

    否则:

    1. 设置 deviceLostfalse

    2. contentTimeline 上执行 map failure steps

    3. 生成校验错误

    4. 返回。

  4. this.[[internal state]] 设为 "unavailable"。

    注意: 缓冲区被映射后,在解除映射前其内容不会改变。

  5. 当下述任一事件发生(以先发生者为准),或已发生时:

    然后在 this.[[device]]设备时间线 上执行后续步骤。

设备时间线 步骤:
  1. this.[[device]]失效,则 deviceLost 设为 true,否则为 false

    注意: 设备可能在前后步骤之间丢失。

  2. 如果 deviceLost

    1. contentTimeline 上执行 map failure steps

    否则:

    1. internalStateAtCompletion = this.[[internal state]]

      注意: 此时缓冲区只有在已被 unmap() 的情况下才会重新变为 "available";那种情况下 [[pending_map]] 已不再是 p,下面的映射成功步骤将不会生效。

    2. dataForMappedRegionthisoffset 开始,长度为 rangeSize 字节的内容。

    3. contentTimeline 上执行 map success steps

内容时间线 map success steps
  1. this.[[pending_map]]p

    注意: 映射已被 unmap() 取消。

    1. 断言 p 已被拒绝。

    2. 返回。

  2. 断言 p 仍为 pending。

  3. 断言 internalStateAtCompletion 为 "unavailable"。

  4. mapping = 初始化活动缓冲区映射,mode 为 mode,range 为 [offset, offset + rangeSize]

    若分配失败:

    1. this.[[pending_map]] = null,并 拒绝 p,错误为 RangeError

    2. 返回。

  5. mapping.data 设为 dataForMappedRegion

  6. this.[[mapping]] = mapping

  7. this.[[pending_map]] = null,并 resolve p

内容时间线 map failure steps
  1. 如果 this.[[pending_map]]p

    注意: 映射已被 unmap() 取消。

    1. 断言 p 已被拒绝。

    2. 返回。

  2. 断言 p 仍为 pending。

  3. this.[[pending_map]] = null

  4. 如果 deviceLost

    1. 拒绝 p 并抛出 AbortError

      注意: 这和用 unmap() 取消 map 时抛出的错误类型一致。

    否则:

    1. 拒绝 p 并抛出 OperationError

getMappedRange(offset, size)

返回包含指定范围 GPUBuffer 内容的 ArrayBuffer

调用对象: GPUBuffer this

参数:

GPUBuffer.getMappedRange(offset, size) 方法的参数。
参数 类型 可为 null 可选 描述
offset GPUSize64 要返回内容的起始字节偏移。
size GPUSize64 要返回的 ArrayBuffer 字节长度。

返回值: ArrayBuffer

内容时间线 步骤:

  1. 如果 size 未指定:

    1. rangeSize = max(0, this.size - offset)。

    否则,rangeSize = size

  2. 如以下任一条件不满足,则抛出 OperationError 并返回:

    注意: 对于以 mappedAtCreation 创建的 GPUBuffer,即使它是无效的,获取其映射范围也总是有效的,因为 内容时间线 可能尚不知道它无效。

  3. data = this.[[mapping]].data

  4. view = 创建 ArrayBuffer,大小为 rangeSize,指针可变地引用 data 从 (offset - [[mapping]].range[0]) 开始的内容。

    注意: 这里不会抛出 RangeError,因为 data 已在 mapAsync()createBuffer() 时分配。

  5. 设置 view.[[ArrayBufferDetachKey]] 为 "WebGPUBufferMapping"。

    注意: 这样如果尝试用除 unmap() 外的方法分离该 ArrayBuffer,会抛出 TypeError

  6. 追加 viewthis.[[mapping]].views

  7. 返回 view

注意:getMappedRange() 调用未先检查映射状态(如等待 mapAsync() resolve、查询 mapState"mapped" 或等待 onSubmittedWorkDone()),用户代理应考虑发出开发警告。

unmap()

解除 GPUBuffer 的映射范围,使其内容重新可被 GPU 使用。

调用对象: GPUBuffer this

返回值: undefined

内容时间线 步骤:

  1. 如果 this.[[pending_map]] 不为 null

    1. 拒绝 this.[[pending_map]],错误为 AbortError

    2. this.[[pending_map]] = null

  2. 如果 this.[[mapping]]null

    1. 返回。

  3. 对于 this.[[mapping]].views 中的每个 ArrayBuffer ab

    1. 执行 DetachArrayBuffer(ab, "WebGPUBufferMapping")。

  4. bufferUpdate = null

  5. 如果 this.[[mapping]].mode 包含 WRITE

    1. bufferUpdate = { data: this.[[mapping]].data, offset: this.[[mapping]].range[0] }。

    注意: 如果缓冲区不是以 WRITE 模式映射,解除映射时应用对映射范围 ArrayBuffer 的本地修改会被丢弃,不会影响后续映射内容。

  6. this.[[mapping]] = null

  7. this.[[device]]设备时间线 上执行后续步骤。

设备时间线 步骤:
  1. 如下条件任一不满足则返回:

  2. 断言 this.[[internal state]] 为 "unavailable"。

  3. 如果 bufferUpdate 不为 null

    1. this.[[device]].queue队列时间线 上执行:

      队列时间线 步骤:
      1. bufferUpdate.data 更新 thisbufferUpdate.offset 开始的内容。

  4. this.[[internal state]] 设为 "available"。

6. 纹理与纹理视图

6.1. GPUTexture

纹理(texture)1d2d3d 的数据数组组成,每个元素可包含多个值用于表示如颜色等信息。纹理的读写方式取决于创建时使用的 GPUTextureUsage。 例如,纹理可被采样、读写于渲染与计算管线着色器,也可由渲染通道输出写入。纹理在 GPU 内部通常以多维访问优化的布局存储,而非线性存储。

一个 纹理 由一个或多个 纹理子资源(texture subresource) 组成,每个子资源由 mipmap 级别 唯一标识, 对于 2d 纹理,还包括 数组层aspect(分量)

纹理子资源 是一种 子资源:每个子资源可在单一 用法范围 内有不同 内部用途

每个 mipmap 级别 的子资源 在每个空间维度上的尺寸大约是上一级资源的一半 (见 mipmap 级别特定逻辑纹理范围)。 0 级子资源的尺寸等于纹理本身的尺寸,更小的级别通常用于存储同一图像的低分辨率版本。 GPUSampler 与 WGSL 提供了选择与插值 细节级别 的能力,可显式或自动进行。

"2d" 纹理可以是 数组层(array layer) 的集合。 每一层的子资源尺寸都相同。非 2d 纹理的所有子资源的数组层索引都为 0。

每个子资源有一个 aspect(分量)。 彩色纹理只有一个分量:color深度或模板格式 的纹理可能有多个分量: depth 分量, stencil 分量,或两者皆有, 可被特殊用法所使用,如 depthStencilAttachment"depth" 绑定。

"3d" 纹理可以有多个 切片(slice),每个切片是该纹理在特定 z 值下的二维图像。 切片不是独立的子资源。

[Exposed=(Window, Worker), SecureContext]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    undefined destroy();

    readonly attribute GPUIntegerCoordinateOut width;
    readonly attribute GPUIntegerCoordinateOut height;
    readonly attribute GPUIntegerCoordinateOut depthOrArrayLayers;
    readonly attribute GPUIntegerCoordinateOut mipLevelCount;
    readonly attribute GPUSize32Out sampleCount;
    readonly attribute GPUTextureDimension dimension;
    readonly attribute GPUTextureFormat format;
    readonly attribute GPUFlagsConstant usage;
};
GPUTexture includes GPUObjectBase;

GPUTexture 拥有如下不可变属性

width类型为 GPUIntegerCoordinateOut,只读

GPUTexture 的宽度。

height类型为 GPUIntegerCoordinateOut,只读

GPUTexture 的高度。

depthOrArrayLayers类型为 GPUIntegerCoordinateOut,只读

GPUTexture 的深度或层数。

mipLevelCount类型为 GPUIntegerCoordinateOut,只读

GPUTexture 的 mip 级别数量。

sampleCount类型为 GPUSize32Out,只读

GPUTexture 的采样数。

dimension类型为 GPUTextureDimension,只读

每个子资源的 texel 集的维度。

format类型为 GPUTextureFormat,只读

GPUTexture 的格式。

usage类型为 GPUFlagsConstant,只读

GPUTexture 允许的用途。

[[viewFormats]],类型为 sequence<GPUTextureFormat>

为此 GPUTexture 创建视图时可用作 GPUTextureViewDescriptor.formatGPUTextureFormat 集合。

GPUTexture 拥有如下设备时间线属性

[[destroyed]],类型为 boolean, 初始值为 false

如果纹理被销毁,它将无法再被用于任何操作,其底层内存可以被释放。

compute render extent(baseSize, mipLevel)

参数:

返回: GPUExtent3DDict

设备时间线步骤:

  1. extent 为新的 GPUExtent3DDict 对象。

  2. 设置 extent.width 为 max(1, baseSize.width >> mipLevel)。

  3. 设置 extent.height 为 max(1, baseSize.height >> mipLevel)。

  4. 设置 extent.depthOrArrayLayers 为 1。

  5. 返回 extent
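上述算法可以直接移植为如下 JavaScript 草图(仅作示意,非规范实现;computeRenderExtent 为自拟名称):

```javascript
// 计算在指定 mip 级别上渲染时实际使用的范围:
// 宽高逐级减半(至少为 1),深度固定为 1。
function computeRenderExtent(baseSize, mipLevel) {
  return {
    width: Math.max(1, baseSize.width >> mipLevel),
    height: Math.max(1, baseSize.height >> mipLevel),
    depthOrArrayLayers: 1,
  };
}
```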

逻辑 mip 级别特定纹理范围(logical miplevel-specific texture extent) 指的是某个 纹理 在特定 mipLevel 下的 texel 尺寸。 其计算过程如下:

逻辑 mip 级别特定纹理范围 (descriptor, mipLevel)

参数:

返回: GPUExtent3DDict

  1. extent 为新的 GPUExtent3DDict 对象。

  2. 如果 descriptor.dimension 为:

    "1d"
    "2d"
    "3d"
  3. 返回 extent

物理 mip 级别特定纹理范围(physical miplevel-specific texture extent) 指的是某个 纹理 在特定 mipLevel 下的 texel 尺寸,并包含为形成完整texel block而可能补齐的额外填充。其计算过程如下:

物理 mip 级别特定纹理范围 (descriptor, mipLevel)

参数:

返回: GPUExtent3DDict

  1. extent 为新的 GPUExtent3DDict 对象。

  2. logicalExtent = 逻辑 mip 级别特定纹理范围(descriptor, mipLevel)。

  3. 如果 descriptor.dimension 为:

    "1d"
    "2d"
    "3d"
  4. 返回 extent
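逻辑与物理 mip 级别纹理范围的计算可以用如下草图示意。注意:本节中按维度展开的具体步骤在上文中有所省略,下面的各维度分支是按通行理解补全的假设;blockWidth/blockHeight 表示该格式的 texel 块尺寸,函数名均为自拟:

```javascript
// 逻辑范围:各维度逐级减半(至少为 1);"2d" 的数组层数不随 mip 变化。
function logicalMiplevelExtent(desc, mipLevel) {
  const e = {
    width: Math.max(1, desc.size.width >> mipLevel),
    height: 1,
    depthOrArrayLayers: 1,
  };
  if (desc.dimension !== '1d') {
    e.height = Math.max(1, desc.size.height >> mipLevel);
    e.depthOrArrayLayers = desc.dimension === '3d'
      ? Math.max(1, desc.size.depthOrArrayLayers >> mipLevel) // 3d:深度也减半
      : desc.size.depthOrArrayLayers;                         // 2d:层数不变
  }
  return e;
}

// 物理范围:在逻辑范围基础上,向上补齐到整数个 texel 块。
function physicalMiplevelExtent(desc, mipLevel, blockWidth = 1, blockHeight = 1) {
  const e = logicalMiplevelExtent(desc, mipLevel);
  e.width = Math.ceil(e.width / blockWidth) * blockWidth;
  e.height = Math.ceil(e.height / blockHeight) * blockHeight;
  return e;
}
```

例如,对一个 60×60 的压缩纹理(4×4 texel 块),mip 级别 1 的逻辑范围是 30×30,物理范围补齐为 32×32。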

6.1.1. GPUTextureDescriptor

dictionary GPUTextureDescriptor
         : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
    sequence<GPUTextureFormat> viewFormats = [];
};

GPUTextureDescriptor 拥有如下成员:

size类型为 GPUExtent3D

纹理的宽度、高度和深度或层数。

mipLevelCount类型为 GPUIntegerCoordinate,默认值为1

该纹理包含的 mip 级别数量。

sampleCount类型为 GPUSize32,默认值为1

纹理的采样数。当 sampleCount > 1 时,表示为多重采样纹理。

dimension类型为 GPUTextureDimension,默认值为"2d"

指定纹理是一维纹理、由二维图层组成的数组,还是三维纹理。

format类型为 GPUTextureFormat

纹理的格式。

usage类型为 GPUTextureUsageFlags

纹理允许的用途。

viewFormats类型为 sequence<GPUTextureFormat>,默认值为[]

指定在该纹理上调用 createView() 时,允许作为 format 的格式(除了纹理实际的 format 外)。

注意:
添加格式到该列表可能会带来显著的性能影响,因此应避免不必要地添加格式。

实际的性能影响高度依赖于目标系统;开发者需在多种系统上测试应用以了解具体影响。例如,在某些系统上,formatviewFormats 包含 "rgba8unorm-srgb" 时, 性能可能不如仅包含 "rgba8unorm" 的纹理。其他格式和格式组合在不同系统上也有类似注意事项。

该列表的格式必须与纹理格式 兼容

两个 GPUTextureFormatformatviewFormat,当且仅当满足以下条件时,纹理视图格式兼容(texture view format compatible)
  • format 等于 viewFormat,或

  • formatviewFormat 唯一区别在于是否为 srgb 格式(后缀为 -srgb)。

enum GPUTextureDimension {
    "1d",
    "2d",
    "3d",
};
"1d"

指定仅有宽度的一维纹理。"1d" 纹理不能有 mipmap,不能多重采样,不能使用压缩或深度/模板格式,也不能作为渲染目标。

"2d"

指定具有宽度和高度,并可具有层的二维纹理。

"3d"

指定具有宽度、高度和深度的三维纹理。"3d" 纹理不能多重采样,且其格式必须支持 3d 纹理(所有纯色格式和部分打包/压缩格式)。

typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC          = 0x01;
    const GPUFlagsConstant COPY_DST          = 0x02;
    const GPUFlagsConstant TEXTURE_BINDING   = 0x04;
    const GPUFlagsConstant STORAGE_BINDING   = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT = 0x10;
};

GPUTextureUsage 标志决定 GPUTexture 创建后可如何使用:

COPY_SRC

该纹理可作为拷贝操作的源。(例如:作为 source 参数传递给 copyTextureToTexture()copyTextureToBuffer() 调用。)

COPY_DST

该纹理可作为拷贝或写入操作的目标。(例如:作为 destination 参数传递给 copyTextureToTexture()copyBufferToTexture() 调用,或作为 writeTexture() 的目标。)

TEXTURE_BINDING

该纹理可在着色器中作为采样纹理绑定(例如:作为 GPUTextureBindingLayout 的绑定组项)。

STORAGE_BINDING

该纹理可在着色器中作为存储纹理绑定(例如:作为 GPUStorageTextureBindingLayout 的绑定组项)。

RENDER_ATTACHMENT

该纹理可作为渲染通道中的颜色或深度/模板附件使用。(例如:作为 GPURenderPassColorAttachment.viewGPURenderPassDepthStencilAttachment.view。)

最大 mipLevel 数量(dimension, size)

参数:

  1. 计算最大维度值 m

  2. 返回 floor(log2(m)) + 1。
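该算法可以移植为如下 JavaScript 草图。其中 "1d" 直接返回 1("1d" 纹理不能有 mipmap)、"2d"/"3d" 分别取相应维度的最大值,这一展开是按规范原文的通行理解补全的假设;maxMipLevelCount 为自拟名称:

```javascript
// 计算给定尺寸的纹理允许的最大 mip 级别数量。
function maxMipLevelCount(dimension, size) {
  if (dimension === '1d') return 1; // "1d" 纹理不能有 mipmap
  const m = dimension === '2d'
    ? Math.max(size.width, size.height)
    : Math.max(size.width, size.height, size.depthOrArrayLayers);
  return Math.floor(Math.log2(m)) + 1;
}
```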

6.1.3. 纹理创建

createTexture(descriptor)

创建一个 GPUTexture

调用对象: GPUDevice this.

参数:

参数列表: GPUDevice.createTexture(descriptor)
参数 类型 可为 null 可选 描述
descriptor GPUTextureDescriptor 描述要创建的 GPUTexture

返回值: GPUTexture

内容时间线步骤:

  1. ? 校验 GPUExtent3D 形状(descriptor.size)。

  2. ? 校验纹理格式所需特性 (descriptor.format, this.[[device]])。

  3. ? 校验纹理格式所需特性 (对 descriptor.viewFormats 的每一项, this.[[device]])。

  4. t = ! 创建新的 WebGPU 对象(this, GPUTexture, descriptor)。

  5. t.width = descriptor.size.width

  6. t.height = descriptor.size.height

  7. t.depthOrArrayLayers = descriptor.size.depthOrArrayLayers

  8. t.mipLevelCount = descriptor.mipLevelCount

  9. t.sampleCount = descriptor.sampleCount

  10. t.dimension = descriptor.dimension

  11. t.format = descriptor.format

  12. t.usage = descriptor.usage

  13. this设备时间线上执行初始化步骤

  14. 返回 t

设备时间线 初始化步骤
  1. 如以下任一条件不满足,产生校验错误置为无效并返回。

  2. t.[[viewFormats]] = descriptor.viewFormats

  3. t 创建设备分配,使每个块具有与全零比特表示的块等价的 texel 表示。

    如分配失败且无副作用,产生内存不足错误置为无效并返回。

validating GPUTextureDescriptor(this, descriptor):

参数:

设备时间线步骤:

  1. limits = this.[[limits]]

  2. 仅当所有如下要求都满足时返回 true,否则返回 false

创建一个 16x16、RGBA 格式、二维、单数组层、单 mip 级别的纹理:
const texture = gpuDevice.createTexture({
    size: { width: 16, height: 16 },
    format: 'rgba8unorm',
    usage: GPUTextureUsage.TEXTURE_BINDING,
});

6.1.4. 纹理销毁

当应用不再需要某个 GPUTexture 时,可以通过调用 destroy() 主动失去对此对象的访问权,而无需等待垃圾回收。

注意: 这样允许用户代理在该 GPUTexture 相关的所有已提交操作完成后,回收其占用的 GPU 内存。

GPUTexture 拥有如下方法:

destroy()

销毁该 GPUTexture

调用对象: GPUTexture this

返回值: undefined

内容时间线步骤:

  1. 设备时间线上执行如下步骤。

设备时间线步骤:
  1. this.[[destroyed]] 设为 true。

6.2. GPUTextureView

GPUTextureView 是对某个 纹理子资源 子集的视图,由特定 GPUTexture 定义。

[Exposed=(Window, Worker), SecureContext]
interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;

GPUTextureView 拥有如下不可变属性

[[texture]],只读

该视图所属的 GPUTexture

[[descriptor]],只读

描述此纹理视图的 GPUTextureViewDescriptor

所有 GPUTextureViewDescriptor 的可选字段都有定义。

[[renderExtent]],只读

对于可渲染视图,这是渲染时实际使用的 GPUExtent3DDict

注意:该范围取决于 baseMipLevel

纹理视图 view子资源(subresources)集合, 设 [[descriptor]]desc, 是 view.[[texture]] 的子资源子集,其中每个子资源 s 满足:

只有当两个 GPUTextureView 的子资源集合有交集时,它们才 视图别名(texture-view-aliasing)

6.2.1. 纹理视图创建

dictionary GPUTextureViewDescriptor
         : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureUsageFlags usage = 0;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount;
};

GPUTextureViewDescriptor 拥有如下成员:

format类型为 GPUTextureFormat

纹理视图的格式。必须是纹理的 format 或创建时指定的 viewFormats 之一。

dimension类型为 GPUTextureViewDimension

视图的维度。

usage类型为 GPUTextureUsageFlags,默认值为0

纹理视图允许的 usage。必须是纹理 usage 标志的子集。若为 0,则默认为纹理的全部 usage 标志。

注意:如果视图的 format 不支持纹理的全部 usage,则默认值会失败,此时必须显式指定视图的 usage

aspect类型为 GPUTextureAspect,默认值为"all"

视图可访问的纹理 aspect

baseMipLevel类型为 GPUIntegerCoordinate,默认值为0

该视图可访问的第一个(最详细) mipmap 级别。

mipLevelCount类型为 GPUIntegerCoordinate

baseMipLevel 开始,该视图可访问的 mipmap 级别数量。

baseArrayLayer类型为 GPUIntegerCoordinate,默认值为0

该视图可访问的第一个数组层索引。

arrayLayerCount类型为 GPUIntegerCoordinate

baseArrayLayer 开始,该视图可访问的数组层数量。

enum GPUTextureViewDimension {
"1d",
"2d",
"2d-array",
"cube",
"cube-array",
"3d",
};
"1d"

该视图作为一维图像。

对应的 WGSL 类型:

  • texture_1d

  • texture_storage_1d

"2d"

该视图作为单一的二维图像。

对应的 WGSL 类型:

  • texture_2d

  • texture_storage_2d

  • texture_multisampled_2d

  • texture_depth_2d

  • texture_depth_multisampled_2d

"2d-array"

该视图作为二维图像数组。

对应的 WGSL 类型:

  • texture_2d_array

  • texture_storage_2d_array

  • texture_depth_2d_array

"cube"

该视图作为立方体贴图。

该视图有 6 个数组层,分别对应立方体的 6 个面,顺序为 [+X, -X, +Y, -Y, +Z, -Z],朝向如下:

立方体贴图各面。+U/+V 轴表示各面纹理坐标,也即每个面的 texel copy 内存布局。

注意:从内部看,这是一个左手坐标系,+X 右,+Y 上,+Z 前。

采样时会在立方体各面间无缝进行。

对应的 WGSL 类型:

  • texture_cube

  • texture_depth_cube

"cube-array"

该视图作为 n 个立方体贴图的打包数组,每个立方体 6 个数组层,总共 6n 层。

对应的 WGSL 类型:

  • texture_cube_array

  • texture_depth_cube_array

"3d"

该视图作为三维图像。

对应的 WGSL 类型:

  • texture_3d

  • texture_storage_3d

每个 GPUTextureAspect 值对应一组 aspectaspect 集合如下定义:

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only",
};
"all"

纹理格式的所有可用 aspect 都可被该视图访问。对于颜色格式,可访问 color aspect。对于深度-模板复合格式,可访问 depth 和 stencil 两个 aspect。只有单一 aspect 的深度或模板格式,仅可访问该 aspect。

aspect 集合为 [color, depth, stencil]。

"stencil-only"

仅可访问深度或模板格式的 stencil aspect。

aspect 集合为 [stencil]。

"depth-only"

仅可访问深度或模板格式的 depth aspect。

aspect 集合为 [depth]。

createView(descriptor)

创建一个 GPUTextureView

注意:
默认情况下,createView() 会创建一个可以表示整个纹理的视图。例如,在一个具有多个图层的 "2d" 纹理上调用 createView() 且未指定 dimension 时,将会创建一个 "2d-array" GPUTextureView, 即使指定了 arrayLayerCount 为 1。

对于那些在开发时无法确定图层数的来源创建的纹理,建议在调用 createView() 时显式提供 dimension ,以确保着色器兼容性。

调用对象: GPUTexture this

参数:

参数列表: GPUTexture.createView(descriptor)
参数 类型 可为 null 可选 描述
descriptor GPUTextureViewDescriptor 要创建的 GPUTextureView 的描述。

返回值: view,类型为 GPUTextureView

内容时间线步骤:

  1. 如果提供了 descriptor.format: ? 校验纹理格式所需特性(descriptor.format, this.[[device]])。

  2. view = ! 创建新的 WebGPU 对象(this, GPUTextureView, descriptor)。

  3. this设备时间线上执行初始化步骤

  4. 返回 view

设备时间线 初始化步骤
  1. descriptor 设为对 thisdescriptor 执行 解析 GPUTextureViewDescriptor 默认值 的结果。

  2. 如下条件有任一不满足,则产生校验错误,将 view 置为无效并返回。

  3. view.[[texture]] = this。

  4. view.[[descriptor]] = descriptor。

  5. 若 descriptor.usage 包含 RENDER_ATTACHMENT:

    1. 设 renderExtent = 计算渲染范围([this.width, this.height, this.depthOrArrayLayers], descriptor.baseMipLevel)。

    2. view.[[renderExtent]] = renderExtent。

当为 GPUTextureView textureGPUTextureViewDescriptor descriptor 执行 解析 GPUTextureViewDescriptor 默认值 时,执行如下设备时间线步骤:
  1. resolveddescriptor 的副本。

  2. 如果 resolved.format 未提供:

    1. 设 format = 解析 GPUTextureAspect(texture.[[descriptor]].format, descriptor.aspect)。

    2. 如果 format 为 null:

      • 设置 resolved.format = texture.[[descriptor]].format。

      否则:

      • 设置 resolved.format = format。

  3. 如果 resolved.mipLevelCount 未提供: 设置 resolved.mipLevelCount = texture.mipLevelCount − resolved.baseMipLevel。

  4. 如果 resolved.dimension 未提供,则按 texture.dimension:

    "1d"

    设置 resolved.dimension = "1d"。

    "2d"

    若 texture 的 array layer count 为 1:

    设置 resolved.dimension = "2d"。

    否则:

    设置 resolved.dimension = "2d-array"。

    "3d"

    设置 resolved.dimension = "3d"。

  5. 如果 resolved.arrayLayerCount 未提供,且 resolved.dimension 为:

    "1d""2d"、或"3d"

    设置 resolved.arrayLayerCount = 1

    "cube"

    设置 resolved.arrayLayerCount = 6

    "2d-array""cube-array"

    设置 resolved.arrayLayerCount = array layer count(texture) − resolved.baseArrayLayer。

  6. 如果 resolved.usage 为 0,设置 resolved.usage = texture.usage。

  7. 返回 resolved

要确定 GPUTexture texturearray layer count,执行如下步骤:
  1. texture.dimension 为:

    "1d""3d"

    返回 1

    "2d"

    返回 texture.depthOrArrayLayers
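上述默认值解析与 array layer count 的计算,可以用如下非规范性 JavaScript 草图表示。函数名 resolveTextureViewDefaults、arrayLayerCount 为本文示意所设,且省略了 format/aspect 的解析分支,仅演示 mipLevelCount、dimension、arrayLayerCount 与 usage 的默认规则:

```javascript
// 非规范性示意:按上文步骤解析 GPUTextureViewDescriptor 的默认值。
// 省略了 format/aspect 解析;texture 为含 dimension、depthOrArrayLayers、
// mipLevelCount、usage 的普通对象。

function arrayLayerCount(texture) {
  // "1d" 与 "3d" 纹理只有 1 层;"2d" 纹理的层数为 depthOrArrayLayers
  return texture.dimension === '2d' ? texture.depthOrArrayLayers : 1;
}

function resolveTextureViewDefaults(texture, descriptor) {
  const resolved = { baseMipLevel: 0, baseArrayLayer: 0, usage: 0, ...descriptor };

  if (resolved.mipLevelCount === undefined) {
    resolved.mipLevelCount = texture.mipLevelCount - resolved.baseMipLevel;
  }

  if (resolved.dimension === undefined) {
    switch (texture.dimension) {
      case '1d': resolved.dimension = '1d'; break;
      case '2d':
        resolved.dimension = arrayLayerCount(texture) === 1 ? '2d' : '2d-array';
        break;
      case '3d': resolved.dimension = '3d'; break;
    }
  }

  if (resolved.arrayLayerCount === undefined) {
    switch (resolved.dimension) {
      case '1d': case '2d': case '3d':
        resolved.arrayLayerCount = 1; break;
      case 'cube':
        resolved.arrayLayerCount = 6; break;
      case '2d-array': case 'cube-array':
        resolved.arrayLayerCount = arrayLayerCount(texture) - resolved.baseArrayLayer;
        break;
    }
  }

  // usage 为 0 时默认为纹理的全部 usage 标志
  if (resolved.usage === 0) resolved.usage = texture.usage;
  return resolved;
}
```

例如,对一个 6 层的 "2d" 纹理不带参数解析时,默认得到 "2d-array"、6 层、覆盖全部 mip 级别的视图。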

6.3. 纹理格式

格式名称指定了分量顺序、每个分量的位数和分量的数据类型。

如果格式带有 -srgb 后缀,则在着色器中读取和写入颜色值时会应用 sRGB 的 gamma 到线性及其反向转换。压缩纹理格式由 特性 提供。命名应遵循此约定,以纹理名称为前缀,例如 etc2-rgba8unorm

texel block(像素块)是像素格式 GPUTextureFormat 下纹理的单一可寻址元素,在区块压缩格式下为单一压缩块。

texel block width(像素块宽度)texel block height(像素块高度)指定了一个像素块的尺寸。

texel block copy footprint指的是某个GPUTextureFormat中某aspect像素块拷贝时所占字节数(如适用)。

注意: 像素块内存开销是指存储一个GPUTextureFormat 像素块所需的字节数。不是所有格式都完全定义此值。该值仅作参考,非规范性内容。

enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16unorm",
    "r16snorm",
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16unorm",
    "rg16snorm",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2uint",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16unorm",
    "rgba16snorm",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth/stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // ETC2 compressed formats usable if "texture-compression-etc2" is both
    // supported by the device/user agent and enabled in requestDevice.
    "etc2-rgb8unorm",
    "etc2-rgb8unorm-srgb",
    "etc2-rgb8a1unorm",
    "etc2-rgb8a1unorm-srgb",
    "etc2-rgba8unorm",
    "etc2-rgba8unorm-srgb",
    "eac-r11unorm",
    "eac-r11snorm",
    "eac-rg11unorm",
    "eac-rg11snorm",

    // ASTC compressed formats usable if "texture-compression-astc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "astc-4x4-unorm",
    "astc-4x4-unorm-srgb",
    "astc-5x4-unorm",
    "astc-5x4-unorm-srgb",
    "astc-5x5-unorm",
    "astc-5x5-unorm-srgb",
    "astc-6x5-unorm",
    "astc-6x5-unorm-srgb",
    "astc-6x6-unorm",
    "astc-6x6-unorm-srgb",
    "astc-8x5-unorm",
    "astc-8x5-unorm-srgb",
    "astc-8x6-unorm",
    "astc-8x6-unorm-srgb",
    "astc-8x8-unorm",
    "astc-8x8-unorm-srgb",
    "astc-10x5-unorm",
    "astc-10x5-unorm-srgb",
    "astc-10x6-unorm",
    "astc-10x6-unorm-srgb",
    "astc-10x8-unorm",
    "astc-10x8-unorm-srgb",
    "astc-10x10-unorm",
    "astc-10x10-unorm-srgb",
    "astc-12x10-unorm",
    "astc-12x10-unorm-srgb",
    "astc-12x12-unorm",
    "astc-12x12-unorm-srgb",
};

"depth24plus""depth24plus-stencil8" 格式的 depth 分量可以实现为 24 位深度值,也可以实现为 "depth32float" 值。

stencil8 格式可以实现为真正的 "stencil8",也可以实现为 "depth24stencil8"(此时 depth aspect 是隐藏且不可访问的)。

注意:
尽管 depth32float 通道的精度在可表示区间 (0.0 到 1.0) 内严格高于 24 位深度通道,但二者可表示的值集合并不完全相同。

格式为 可渲染(renderable)格式,若其为 颜色可渲染格式,或为深度或模板格式。 若某格式在 § 26.1.1 普通颜色格式 中标记了 RENDER_ATTACHMENT 能力,则为颜色可渲染格式。其它格式不是颜色可渲染格式。所有深度或模板格式均为可渲染格式。

可渲染格式若可与渲染管线混合(blending)使用,则也是 可混合(blendable)格式。详见 § 26.1 纹理格式能力

格式为 可过滤(filterable)格式,若其支持 GPUTextureSampleType"float"(不只是 "unfilterable-float"),即该格式可用于 "filtering" 类型 GPUSampler。详见 § 26.1 纹理格式能力

解析 GPUTextureAspect(format, aspect)

参数:

返回值: GPUTextureFormatnull

  1. aspect 为:

    "all"

    返回 format

    "depth-only"
    "stencil-only"

    format 为 depth-stencil-format: 返回 aspect-specific format,详见 § 26.1.2 深度/模板格式;如该 format 不含此 aspect,返回 null

  2. 返回 null
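上述解析过程可用如下非规范性 JavaScript 草图表示。其中 ASPECT_FORMATS 只是示例性地列出几个深度/模板格式的 aspect-specific format(完整表见 § 26.1.2),函数名为本文示意所设:

```javascript
// 非规范性示意:解析 GPUTextureAspect(format, aspect)。
// ASPECT_FORMATS 仅为示例性的部分 aspect-specific format 表。
const ASPECT_FORMATS = {
  'depth24plus-stencil8':  { depth: 'depth24plus',  stencil: 'stencil8' },
  'depth32float-stencil8': { depth: 'depth32float', stencil: 'stencil8' },
  'depth16unorm':          { depth: 'depth16unorm' },
  'depth24plus':           { depth: 'depth24plus' },
  'depth32float':          { depth: 'depth32float' },
  'stencil8':              { stencil: 'stencil8' },
};

function resolveTextureAspect(format, aspect) {
  if (aspect === 'all') return format;
  const entry = ASPECT_FORMATS[format];
  if (!entry) return null; // 非深度/模板格式
  if (aspect === 'depth-only')   return entry.depth   ?? null;
  if (aspect === 'stencil-only') return entry.stencil ?? null;
  return null;
}
```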

某些纹理格式的使用需要在 GPUDevice 上启用特性。由于新格式可添加到规范中,所以实现可能不知道这些枚举值。为保证一致性,尝试在未启用相关特性的设备上使用需要特性的格式时会抛出异常,这与实现本身不识别该格式时行为一致。

§ 26.1 纹理格式能力,了解哪些 GPUTextureFormat 需要特性支持。

校验纹理格式所需特性(GPUTextureFormat format, 逻辑device device)
  1. format 需要特性且 device.[[features]]包含该特性:

    1. 抛出 TypeError

6.4. GPUExternalTexture

GPUExternalTexture 是一个可采样的二维纹理,用于封装外部视频帧。它是不可变快照;其内容不会随时间变化,无论是来自 WebGPU 内部(仅可采样)还是外部(如视频帧推进)。

GPUExternalTexture 可通过 externalTexture 绑定组布局成员作为绑定项绑定到绑定组。注意该成员会占用多个绑定槽,详见相关定义。

注意:
GPUExternalTexture 可以在不复制导入源的情况下实现,但这取决于实现定义因素。底层表示的所有权可能是独占的,也可能与其他所有者(如视频解码器)共享,但这对应用透明。

外部纹理的底层表示不可见(除精确采样行为外),但通常包括:

实现内部使用的配置在不同时间、系统、用户代理、媒体源、甚至同一个视频资源的帧之间可能不一致。为适应各种可能的实现,每个外部纹理绑定都会保守地占用:

[Exposed=(Window, Worker), SecureContext]
interface GPUExternalTexture {
};
GPUExternalTexture includes GPUObjectBase;

GPUExternalTexture 拥有如下不可变属性

[[descriptor]],类型为GPUExternalTextureDescriptor,只读

创建该纹理时所用的 descriptor。

GPUExternalTexture 拥有如下内部属性:

[[expired]],类型为 boolean,初始值为 false

表示对象是否已过期(不可再用)。

注意:[[destroyed]] 不同,该值可从 true 恢复到 false

6.4.1. 导入外部纹理

外部纹理可通过 importExternalTexture() 从外部视频对象创建。

HTMLVideoElement 创建的外部纹理会在导入后自动过期(即销毁),而不像其他资源那样需要手动或等垃圾回收。外部纹理过期时,其 [[expired]] 属性变为 true

VideoFrame 创建的外部纹理会在且仅在源 VideoFrame关闭时过期(无论是通过 close() 还是其他方式)。

注意:decode() 所述,开发者在输出 VideoFrame 上调用 close(),以避免解码器阻塞。如果导入的 VideoFrame 被丢弃但未关闭,导入的 GPUExternalTexture 会保持其存活,直到它也被丢弃。只有两者都被丢弃后,VideoFrame 才能被垃圾回收。由于垃圾回收不可预测,这仍然可能导致解码器阻塞。

一旦 GPUExternalTexture 过期,必须再次调用 importExternalTexture()。不过,用户代理可能会取消过期,返回同一个 GPUExternalTexture,而不是新建一个。通常只有应用调度与视频帧率同步(如用 requestVideoFrameCallback())时才会避免返回同一个对象。如返回同一对象,二者相等,且引用旧对象的 GPUBindGroupGPURenderBundle 等依然有效。

dictionary GPUExternalTextureDescriptor
 : GPUObjectDescriptorBase {
required (HTMLVideoElement or VideoFrame) source;
PredefinedColorSpace colorSpace = "srgb";
};

GPUExternalTextureDescriptor 字典含如下成员:

source类型为 (HTMLVideoElement or VideoFrame)

要导入为外部纹理的视频源。源尺寸详见外部源尺寸表。

colorSpace类型为 PredefinedColorSpace,默认 "srgb"

读取时,source 的图像内容会被转换到的色彩空间。

importExternalTexture(descriptor)

创建一个包裹所提供图像源的 GPUExternalTexture

调用对象: GPUDevice this

参数:

GPUDevice.importExternalTexture(descriptor) 方法参数
参数 类型 可为 null 可选 描述
descriptor GPUExternalTextureDescriptor 提供外部图像源对象及相关创建选项。

返回: GPUExternalTexture

内容时间线步骤:

  1. source = descriptor.source

  2. source 的当前图像内容与最近一次用同一 descriptor(忽略 label)调用 importExternalTexture() 相同,且用户代理选择复用:

    1. previousResult 为上次返回的 GPUExternalTexture

    2. previousResult.[[expired]]false,恢复底层资源所有权。

    3. result = previousResult

    注意:这样应用可检测重复导入,避免重新创建依赖对象(如 GPUBindGroup)。实现仍需支持同一帧被多个 GPUExternalTexture 包裹,因为导入元数据如 colorSpace 可变。

    否则:

    1. source 不为 origin-clean,抛出 SecurityError 并返回。

    2. usability = ? 检查图像参数可用性(source)。

    3. usabilitygood

      1. 生成校验错误

      2. 返回无效 GPUExternalTexture

    4. data 为将 source 当前图像内容转换为 descriptor.colorSpace(未预乘 alpha)所得。

      可能导致超出 [0,1] 范围的值。如需裁剪,可采样后再做。

      注意:虽描述为拷贝,也可实现为指向只读底层数据和元数据的引用。

    5. result 为包裹 data 的新 GPUExternalTexture

  3. sourceHTMLVideoElement排队自动过期任务,内容包括:

    1. result.[[expired]]true,释放底层资源所有权。

    注意:应在采样同一任务中导入 HTMLVideoElement,建议结合 requestVideoFrameCallbackrequestAnimationFrame() 使用,否则纹理可能在应用用完前被销毁。

  4. sourceVideoFrame,则当 source 关闭时,执行:

    1. result.[[expired]]true

  5. result.label = descriptor.label

  6. 返回 result

使用 video 元素外部纹理以页面动画帧率渲染:
const videoElement = document.createElement('video');
// ... 设置 videoElement,等待其就绪 ...

function frame() {
    requestAnimationFrame(frame);

    // 每帧都重新导入视频,因为导入的纹理很可能已过期。
    // 浏览器可能缓存并复用旧帧,如复用会返回同一个 GPUExternalTexture。
    // 这时旧的 bind group 依然有效。
    const externalTexture = gpuDevice.importExternalTexture({
        source: videoElement
    });

    // ... 使用 externalTexture 渲染 ...
}
requestAnimationFrame(frame);
若有 requestVideoFrameCallback,则以视频帧率渲染 video 元素外部纹理:
const videoElement = document.createElement('video');
// ... 设置 videoElement ...

function frame() {
    videoElement.requestVideoFrameCallback(frame);

    // 帧已推进,需重新导入
    const externalTexture = gpuDevice.importExternalTexture({
        source: videoElement
    });

    // ... 使用 externalTexture 渲染 ...
}
videoElement.requestVideoFrameCallback(frame);

6.5. 采样外部纹理绑定

externalTexture 绑定点允许绑定 GPUExternalTexture 对象(来自视频等动态图像源)。同时也支持 GPUTextureGPUTextureView

注意:GPUTextureGPUTextureView 被绑定到 externalTexture 绑定点时,其行为等同于单一 RGBA plane、无裁剪、无旋转、无颜色转换的 GPUExternalTexture

外部纹理在 WGSL 中以 texture_external 表示,可通过 textureLoadtextureSampleBaseClampToEdge 读取。

textureSampleBaseClampToEdge 所用的 sampler 用于采样底层纹理。

绑定资源类型GPUExternalTexture 时,采样结果在 colorSpace 设置的色彩空间中。对于具体外部纹理,采样器(及其过滤操作)是在底层值转换到指定色彩空间之前还是之后应用,是实现相关的。

注意: 若内部表示为 RGBA 平面,则采样行为与常规 2D 纹理一致。若有多个底层平面(如 Y+UV),采样器会分别对每个底层纹理采样,再进行 YUV 到指定色彩空间的转换。

7. 采样器(Samplers)

7.1. GPUSampler

GPUSampler 用于编码可在着色器中解释纹理资源数据的变换与过滤信息。

GPUSampler 通过 createSampler() 创建。

[Exposed=(Window, Worker), SecureContext]
interface GPUSampler {
};
GPUSampler includes GPUObjectBase;

GPUSampler 拥有如下不可变属性

[[descriptor]],类型为 GPUSamplerDescriptor,只读

创建 GPUSampler 时使用的 GPUSamplerDescriptor

[[isComparison]],类型为 boolean,只读

GPUSampler 是否为比较采样器。

[[isFiltering]],类型为 boolean,只读

GPUSampler 是否对纹理的多个采样结果加权(即支持过滤)。

7.1.1. GPUSamplerDescriptor

GPUSamplerDescriptor 指定了创建 GPUSampler 时应使用的选项。

dictionary GPUSamplerDescriptor
         : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUMipmapFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 32;
    GPUCompareFunction compare;
    [Clamp] unsigned short maxAnisotropy = 1;
};
addressModeU, 类型为 GPUAddressMode,默认值为 "clamp-to-edge"
addressModeV, 类型为 GPUAddressMode,默认值为 "clamp-to-edge"
addressModeW, 类型为 GPUAddressMode,默认值为 "clamp-to-edge"

分别指定纹理在宽度、高度和深度坐标方向上的 地址模式

magFilter, 类型为 GPUFilterMode,默认值为 "nearest"

指定当采样区域小于等于一个纹素时的采样行为。

minFilter, 类型为 GPUFilterMode,默认值为 "nearest"

指定当采样区域大于一个纹素时的采样行为。

mipmapFilter, 类型为 GPUMipmapFilterMode,默认值为 "nearest"

指定在 mipmap 层级之间采样时的行为。

lodMinClamp, 类型为 float,默认值为 0
lodMaxClamp, 类型为 float,默认值为 32

分别指定采样纹理时内部使用的最小和最大细节层级(LOD)

compare, 类型为 GPUCompareFunction

如果提供,则采样器将被设为比较采样器,并使用指定的 GPUCompareFunction

注意:比较采样器可以使用过滤,但采样结果依赖于实现,可能不同于普通过滤规则。

maxAnisotropy, 类型为 unsigned short,默认值为 1

指定采样器使用的最大各向异性值上限。当 maxAnisotropy 大于 1 且实现支持时,启用各向异性过滤。

各向异性过滤可提高在斜视角下采样纹理的图像质量。更高的 maxAnisotropy 值表示过滤时支持的最大各向异性比。

注意:
大多数实现支持 maxAnisotropy 取值范围为 1 到 16(含)。实际使用值将被限制在平台支持的最大值。

具体的过滤行为依赖于实现。

细节层级(Level of detail,LOD) 描述了采样纹理时选用的 mip 层级。可以通过 textureSampleLevel 等着色器方法显式指定,也可以由纹理坐标导数隐式决定。

注意:参考 Scale Factor Operation, LOD Operation and Image Level SelectionVulkan 1.3 规范)了解隐式 LOD 如何计算。

GPUAddressMode 描述了采样纹理时纹素超出纹理边界时的采样器行为。

enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat",
};
"clamp-to-edge"

纹理坐标被限制在 0.0 到 1.0 区间内。

"repeat"

纹理坐标将在纹理另一侧环绕。

"mirror-repeat"

纹理坐标将在纹理另一侧环绕,但当坐标的整数部分为奇数时,纹理会翻转。
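三种地址模式对归一化纹理坐标的直观效果,可以用如下非规范性 JavaScript 草图演示。实际的寻址发生在纹素层面且与过滤方式相关,此处仅示意坐标映射,applyAddressMode 为本文示意所设的假想函数:

```javascript
// 非规范性示意:各 GPUAddressMode 对归一化坐标的映射效果。
function applyAddressMode(coord, mode) {
  switch (mode) {
    case 'clamp-to-edge':
      // 坐标被限制在 [0.0, 1.0] 区间内
      return Math.min(Math.max(coord, 0.0), 1.0);
    case 'repeat':
      // 只保留小数部分,坐标在另一侧环绕
      return coord - Math.floor(coord);
    case 'mirror-repeat': {
      // 环绕,但整数部分为奇数时镜像翻转
      const frac = coord - Math.floor(coord);
      const odd = ((Math.floor(coord) % 2) + 2) % 2 === 1;
      return odd ? 1.0 - frac : frac;
    }
  }
}
```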

GPUFilterModeGPUMipmapFilterMode 描述采样区域不正好覆盖一个纹素时采样器的行为。

注意:参考 Texel FilteringVulkan 1.3 规范)了解采样器如何确定在不同过滤模式下采样哪些纹素。

enum GPUFilterMode {
    "nearest",
    "linear",
};

enum GPUMipmapFilterMode {
    "nearest",
    "linear",
};
"nearest"

返回最接近纹理坐标的纹素值。

"linear"

每个维度选择两个纹素,并返回它们值的线性插值。

GPUCompareFunction 指定比较采样器的行为。如果在着色器中使用比较采样器,depth_ref 会与获取到的纹素值进行比较,生成比较结果(通过则为 1.0f,未通过为 0.0f)。

比较后,如果启用纹理过滤,则会进行过滤步骤,使多个比较结果混合,结果在 [0, 1] 区间。过滤应当如常处理,但可能以更低精度或不做混合。

enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always",
};
"never"

比较测试始终不通过。

"less"

当提供值小于采样值时,比较测试通过。

"equal"

当提供值等于采样值时,比较测试通过。

"less-equal"

当提供值小于等于采样值时,比较测试通过。

"greater"

当提供值大于采样值时,比较测试通过。

"not-equal"

当提供值不等于采样值时,比较测试通过。

"greater-equal"

当提供值大于等于采样值时,比较测试通过。

"always"

比较测试始终通过。
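上述各比较函数的语义可用如下非规范性 JavaScript 草图归纳,其中 compareSample 为本文示意所设的假想函数,对应“通过为 1.0,未通过为 0.0”的规则(未包含后续可选的过滤混合步骤):

```javascript
// 非规范性示意:GPUCompareFunction 的比较语义。
// depth_ref 为提供值,texelValue 为采样到的纹素值。
const COMPARE_FUNCTIONS = {
  'never':         () => false,
  'less':          (ref, sample) => ref < sample,
  'equal':         (ref, sample) => ref === sample,
  'less-equal':    (ref, sample) => ref <= sample,
  'greater':       (ref, sample) => ref > sample,
  'not-equal':     (ref, sample) => ref !== sample,
  'greater-equal': (ref, sample) => ref >= sample,
  'always':        () => true,
};

function compareSample(fn, depthRef, texelValue) {
  // 通过则为 1.0,未通过为 0.0
  return COMPARE_FUNCTIONS[fn](depthRef, texelValue) ? 1.0 : 0.0;
}
```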

7.1.2. 采样器创建

createSampler(descriptor)

创建一个 GPUSampler

调用对象: GPUDevice this。

参数:

参数表:GPUDevice.createSampler(descriptor) 方法。
参数名 类型 可为 null 可选 描述
descriptor GPUSamplerDescriptor 要创建的 GPUSampler 的描述信息。

返回: GPUSampler

内容时间线步骤:

  1. s = ! 创建新的 WebGPU 对象(this, GPUSampler, descriptor)。

  2. 设备时间线 上对 this 执行 初始化步骤

  3. 返回 s

设备时间线 初始化步骤
  1. 如果下列任一条件不满足,则生成校验错误使 s 无效 并返回。

  2. 设置 s.[[descriptor]]descriptor

  3. 如果 s.[[descriptor]]compare 属性为 null 或未定义,则设置 s.[[isComparison]]false。否则设置为 true

  4. 如果 minFiltermagFiltermipmapFilter 都不是 "linear",则设置 s.[[isFiltering]]false;否则设为 true
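上面第 3、4 步对 [[isComparison]][[isFiltering]] 的推导,可用如下非规范性 JavaScript 草图表示(samplerFlags 为本文示意所设的假想函数):

```javascript
// 非规范性示意:由 GPUSamplerDescriptor 推导采样器的内部标志。
function samplerFlags(descriptor) {
  // compare 为 null 或未定义时不是比较采样器
  const isComparison = descriptor.compare !== undefined && descriptor.compare !== null;
  // 任一过滤模式为 "linear" 时会对多个采样结果加权
  const isFiltering = [descriptor.minFilter, descriptor.magFilter, descriptor.mipmapFilter]
    .some(f => f === 'linear');
  return { isComparison, isFiltering };
}
```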

创建一个支持三线性过滤并重复纹理坐标的 GPUSampler
const sampler = gpuDevice.createSampler({
    addressModeU: 'repeat',
    addressModeV: 'repeat',
    magFilter: 'linear',
    minFilter: 'linear',
    mipmapFilter: 'linear',
});

8. 资源绑定

8.1. GPUBindGroupLayout

GPUBindGroupLayout 定义了绑定在 GPUBindGroup 中的一组资源与着色器阶段可访问性的接口关系。

[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;

GPUBindGroupLayout 具有以下不可变属性

[[descriptor]],类型为 GPUBindGroupLayoutDescriptor, 只读

8.1.1. 绑定组布局创建

GPUBindGroupLayout 通过 GPUDevice.createBindGroupLayout() 创建。

dictionary GPUBindGroupLayoutDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};

GPUBindGroupLayoutDescriptor 字典包含以下成员:

entries类型为 sequence<GPUBindGroupLayoutEntry>

用于描述绑定组内着色器资源绑定的条目列表。

GPUBindGroupLayoutEntry 描述了要包含在 GPUBindGroupLayout 中的单个着色器资源绑定。

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;

    GPUBufferBindingLayout buffer;
    GPUSamplerBindingLayout sampler;
    GPUTextureBindingLayout texture;
    GPUStorageTextureBindingLayout storageTexture;
    GPUExternalTextureBindingLayout externalTexture;
};

GPUBindGroupLayoutEntry 字典包含以下成员:

binding类型为 GPUIndex32

绑定组布局内资源绑定的唯一标识符,对应于 GPUBindGroupEntry.binding 以及 GPUShaderModule 中的 @binding 属性。

visibility类型为 GPUShaderStageFlags

GPUShaderStage 各成员组成的位集合。每个被设置的位表示该资源在对应着色器阶段可访问。

buffer类型为 GPUBufferBindingLayout
sampler类型为 GPUSamplerBindingLayout
texture类型为 GPUTextureBindingLayout
storageTexture类型为 GPUStorageTextureBindingLayout
externalTexture类型为 GPUExternalTextureBindingLayout

这些成员中必须且只能设置一个,表示绑定类型。该成员的内容指定该类型的选项。

createBindGroup() 中对应的资源必须为该绑定类型的binding resource type

typedef [EnforceRange] unsigned long GPUShaderStageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUShaderStage {
    const GPUFlagsConstant VERTEX   = 0x1;
    const GPUFlagsConstant FRAGMENT = 0x2;
    const GPUFlagsConstant COMPUTE  = 0x4;
};

GPUShaderStage 包含以下标志,指示此 GPUBindGroupEntry 对应的 GPUBindGroupLayoutEntry 可见于哪些着色器阶段:

VERTEX

绑定组条目在顶点着色器中可访问。

FRAGMENT

绑定组条目在片元着色器中可访问。

COMPUTE

绑定组条目在计算着色器中可访问。

GPUBindGroupLayoutEntrybinding member 由其定义的成员决定: buffersamplertexturestorageTexture, 或 externalTexture。 每个 GPUBindGroupLayoutEntry 只能定义一个成员。 每个成员有其对应的 GPUBindingResource 类型,每个绑定类型对应一个内部用法,见下表:

绑定成员 资源类型 绑定类型
绑定用法
buffer GPUBufferBinding
(或 GPUBuffer 作为简写)
"uniform" constant
"storage" storage
"read-only-storage" storage-read
sampler GPUSampler "filtering" constant
"non-filtering"
"comparison"
texture GPUTextureView
(或 GPUTexture 作为简写)
"float" constant
"unfilterable-float"
"depth"
"sint"
"uint"
storageTexture GPUTextureView
(或 GPUTexture 作为简写)
"write-only" storage
"read-write"
"read-only" storage-read
externalTexture GPUExternalTexture
GPUTextureView
(或 GPUTexture 作为简写)
constant
当一个 GPUBindGroupLayoutEntry 列表 entries 针对某一限制占用的槽数超过 支持的限制 limits 中的对应支持值时,称该列表超出绑定槽限制(exceeds the binding slot limits)。单个 entry 可能占用多个限制中的多个槽。

设备时间线步骤:

  1. 对于 entries 中的每个 entry,如果:

    entry.buffer?.type"uniform"entry.buffer?.hasDynamicOffsettrue

    视为使用了 1 个 maxDynamicUniformBuffersPerPipelineLayout 槽。

    entry.buffer?.type"storage"entry.buffer?.hasDynamicOffsettrue

    视为使用了 1 个 maxDynamicStorageBuffersPerPipelineLayout 槽。

  2. 对于每个着色器阶段 stage,即 « VERTEX, FRAGMENT, COMPUTE »:

    1. 对于每个 entriesentry,若其 entry.visibility 包含 stage,则:

      entry.buffer?.type"uniform"

      视为使用了 1 个 maxUniformBuffersPerShaderStage 槽。

      entry.buffer?.type"storage""read-only-storage"

      视为使用了 1 个 maxStorageBuffersPerShaderStage 槽。

      entry.sampler 存在(provided

      视为使用了 1 个 maxSamplersPerShaderStage 槽。

      entry.texture 存在

      视为使用了 1 个 maxSampledTexturesPerShaderStage 槽。

      entry.storageTexture 存在

      视为使用了 1 个 maxStorageTexturesPerShaderStage 槽。

      entry.externalTexture 存在

      视为使用 4 个 maxSampledTexturesPerShaderStage 槽, 1 个 maxSamplersPerShaderStage 槽,以及 1 个 maxUniformBuffersPerShaderStage 槽。

      注意:详见 GPUExternalTexture 相关说明。
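上述每阶段槽位的统计规则可用如下非规范性 JavaScript 草图表示。countPerStageSlots 为本文示意所设的假想函数;buffer.type 未提供时按其默认值 "uniform" 处理:

```javascript
// 非规范性示意:按上文步骤统计 entries 在某一着色器阶段占用的绑定槽。
const STAGES = { VERTEX: 0x1, FRAGMENT: 0x2, COMPUTE: 0x4 };

function countPerStageSlots(entries, stage) {
  const slots = { uniformBuffers: 0, storageBuffers: 0, samplers: 0,
                  sampledTextures: 0, storageTextures: 0 };
  for (const entry of entries) {
    if (!(entry.visibility & stage)) continue; // 该阶段不可见的条目不计入
    if (entry.buffer) {
      const type = entry.buffer.type ?? 'uniform';
      if (type === 'uniform') slots.uniformBuffers += 1;
      else slots.storageBuffers += 1; // "storage" 或 "read-only-storage"
    }
    if (entry.sampler) slots.samplers += 1;
    if (entry.texture) slots.sampledTextures += 1;
    if (entry.storageTexture) slots.storageTextures += 1;
    if (entry.externalTexture) {
      // 外部纹理保守地按 4 个采样纹理 + 1 个采样器 + 1 个 uniform 缓冲区计
      slots.sampledTextures += 4;
      slots.samplers += 1;
      slots.uniformBuffers += 1;
    }
  }
  return slots;
}
```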

enum GPUBufferBindingType {
    "uniform",
    "storage",
    "read-only-storage",
};

dictionary GPUBufferBindingLayout {
    GPUBufferBindingType type = "uniform";
    boolean hasDynamicOffset = false;
    GPUSize64 minBindingSize = 0;
};

GPUBufferBindingLayout 字典包含以下成员:

type类型为 GPUBufferBindingType,默认值为 "uniform"

指示绑定到该绑定点的缓冲区所需的类型。

hasDynamicOffset类型为 boolean,默认值为 false

指示该绑定是否需要动态偏移。

minBindingSize类型为 GPUSize64,默认值为 0

指示与该绑定点使用的缓冲区绑定的最小 size

createBindGroup() 中总会对绑定进行该尺寸的校验。

若该值不为 0,则管线创建时还会额外校验 该值是否大于等于变量的最小缓冲区绑定大小

若该值为 0,则管线创建时忽略此校验,转而由 draw/dispatch 指令 校验绑定组中每个绑定是否满足变量的最小缓冲区绑定大小

注意: 理论上,类似的运行时校验也可用于其它用于早期校验的绑定相关字段,如 sampleTypeformat,但目前仅在管线创建时校验。 不过这样的运行时校验可能开销大或不必要复杂,因此仅对 minBindingSize 启用,因为它对易用性影响最大。

enum GPUSamplerBindingType {
    "filtering",
    "non-filtering",
    "comparison",
};

dictionary GPUSamplerBindingLayout {
    GPUSamplerBindingType type = "filtering";
};

GPUSamplerBindingLayout 字典包含以下成员:

type类型为 GPUSamplerBindingType,默认值为 "filtering"

指示绑定到此绑定点所需采样器的类型。

enum GPUTextureSampleType {
    "float",
    "unfilterable-float",
    "depth",
    "sint",
    "uint",
};

dictionary GPUTextureBindingLayout {
    GPUTextureSampleType sampleType = "float";
    GPUTextureViewDimension viewDimension = "2d";
    boolean multisampled = false;
};

GPUTextureBindingLayout 字典包含以下成员:

sampleType类型为 GPUTextureSampleType,默认值为 "float"

指示绑定到该绑定点的纹理视图所需的类型。

viewDimension类型为 GPUTextureViewDimension,默认值为 "2d"

指示绑定到该绑定点的纹理视图所需的 dimension

multisampled类型为 boolean,默认值为 false

指示绑定到该绑定点的纹理视图是否必须为多重采样。

enum GPUStorageTextureAccess {
    "write-only",
    "read-only",
    "read-write",
};

dictionary GPUStorageTextureBindingLayout {
    GPUStorageTextureAccess access = "write-only";
    required GPUTextureFormat format;
    GPUTextureViewDimension viewDimension = "2d";
};

GPUStorageTextureBindingLayout 字典包含以下成员:

access类型为 GPUStorageTextureAccess,默认值为 "write-only"

此绑定的访问模式,指示可读性和可写性。

format类型为 GPUTextureFormat

绑定到此绑定点的纹理视图所需的 format

viewDimension类型为 GPUTextureViewDimension,默认值为 "2d"

绑定到此绑定点的纹理视图所需的 dimension

dictionary GPUExternalTextureBindingLayout {
};

GPUBindGroupLayout 对象拥有以下设备时间线属性

[[entryMap]],类型为 有序映射<GPUSize32, GPUBindGroupLayoutEntry>, 只读

指向该 GPUBindGroupLayout 所描述的 GPUBindGroupLayoutEntry 的绑定索引映射表。

[[dynamicOffsetCount]],类型为 GPUSize32, 只读

GPUBindGroupLayout 中具有动态偏移的缓冲区绑定数量。

[[exclusivePipeline]],类型为 GPUPipelineBase? ,只读

如果是作为默认管线布局一部分创建的,则为创建该 GPUBindGroupLayout 的管线。若不为 null,则用该 GPUBindGroupLayout 创建的 GPUBindGroup 只能与指定的 GPUPipelineBase 一起使用。

createBindGroupLayout(descriptor)

创建一个 GPUBindGroupLayout

调用对象: GPUDevice this

参数:

GPUDevice.createBindGroupLayout(descriptor) 方法参数。
参数 类型 可为 null 可选 描述
descriptor GPUBindGroupLayoutDescriptor 要创建的 GPUBindGroupLayout 的描述信息。

返回: GPUBindGroupLayout

内容时间线步骤:

  1. 对于 descriptor.entries 中的每个 GPUBindGroupLayoutEntry entry

    1. 如果 entry.storageTexture 已提供:

      1. ? 校验纹理格式所需特性,针对 entry.storageTexture.formatthis.[[device]]

  2. layout = ! 创建 WebGPU 对象(this, GPUBindGroupLayout, descriptor)。

  3. this设备时间线上执行 初始化步骤

  4. 返回 layout

设备时间线 初始化步骤
  1. 若下列任意条件不满足,则生成校验错误使 layout 无效并返回。

  2. 设置 layout.[[descriptor]]descriptor

  3. 设置 layout.[[dynamicOffsetCount]]descriptor.entriesbuffer 已提供且 buffer.hasDynamicOffsettrue 的条目数量。

  4. 设置 layout.[[exclusivePipeline]]null

  5. 对于 descriptor.entries 中的每个 GPUBindGroupLayoutEntry entry

    1. entry.binding 作为 key,将 entry 插入 layout.[[entryMap]]

8.1.2. 兼容性

两个 GPUBindGroupLayout 对象 ab 被认为是 组等价(group-equivalent) 当且仅当满足以下所有条件:

如果绑定组布局是组等价的,则它们可以在所有场合下互换使用。

8.2. GPUBindGroup

GPUBindGroup 定义了一组要共同绑定的资源,以及这些资源在着色器阶段的使用方式。

[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;

GPUBindGroup 具有以下设备时间线属性

[[layout]],类型为 GPUBindGroupLayout, 只读

与该 GPUBindGroup 相关联的 GPUBindGroupLayout

[[entries]],类型为 sequence<GPUBindGroupEntry>, 只读

GPUBindGroup 所描述的所有 GPUBindGroupEntry 集合。

[[usedResources]],类型为 usage scope,只读

该绑定组使用的缓冲区和纹理子资源集合, 并关联有内部用法标志列表。

GPUBindGroup bindGroup已绑定缓冲区范围(bound buffer ranges), 给定 list<GPUBufferDynamicOffset> dynamicOffsets,按如下方式计算:
  1. result 为新的 集合<(GPUBindGroupLayoutEntry, GPUBufferBinding)>。

  2. dynamicOffsetIndex 为 0。

  3. bindGroup.[[entries]] 中每个 GPUBindGroupEntry bindGroupEntry(按 bindGroupEntry.binding 排序):

    1. bindGroupLayoutEntry = bindGroup.[[layout]].[[entryMap]][bindGroupEntry.binding]。

    2. bindGroupLayoutEntry.buffer被提供则 continue

    3. bound = get as buffer binding(bindGroupEntry.resource)。

    4. bindGroupLayoutEntry.buffer.hasDynamicOffset

      1. bound.offset 增加 dynamicOffsets[dynamicOffsetIndex]。

      2. dynamicOffsetIndex 增加 1。

    5. (bindGroupLayoutEntry, bound) 添加到 result

  4. 返回 result
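上述步骤可用如下非规范性 JavaScript 草图表示。boundBufferRanges 为本文示意所设的假想函数,entryMap 对应 [[layout]].[[entryMap]],并内联了“作为缓冲区绑定获取”的简写展开:

```javascript
// 非规范性示意:按上文步骤计算已绑定缓冲区范围,并应用动态偏移。
function boundBufferRanges(entries, entryMap, dynamicOffsets) {
  const result = [];
  let dynamicOffsetIndex = 0;
  // 按 binding 排序遍历,保证动态偏移的消费顺序一致
  const sorted = [...entries].sort((a, b) => a.binding - b.binding);
  for (const entry of sorted) {
    const layoutEntry = entryMap.get(entry.binding);
    if (!layoutEntry.buffer) continue; // 非缓冲区绑定直接跳过
    // “作为缓冲区绑定获取”:GPUBuffer 简写形式包装为 GPUBufferBinding
    const bound = entry.resource.buffer
      ? { ...entry.resource }
      : { buffer: entry.resource, offset: 0 };
    bound.offset = bound.offset ?? 0;
    if (layoutEntry.buffer.hasDynamicOffset) {
      bound.offset += dynamicOffsets[dynamicOffsetIndex];
      dynamicOffsetIndex += 1;
    }
    result.push([layoutEntry, bound]);
  }
  return result;
}
```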

8.2.1. 绑定组创建

GPUBindGroup 通过 GPUDevice.createBindGroup() 创建。

dictionary GPUBindGroupDescriptor
         : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};

GPUBindGroupDescriptor 字典包含以下成员:

layout类型为 GPUBindGroupLayout

该绑定组条目所遵循的 GPUBindGroupLayout

entries类型为 sequence<GPUBindGroupEntry>

描述要为 layout 所描述的每个绑定暴露给着色器的资源条目的列表。

typedef (GPUSampler or
         GPUTexture or
         GPUTextureView or
         GPUBuffer or
         GPUBufferBinding or
         GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};

GPUBindGroupEntry 描述要绑定到 GPUBindGroup 的单个资源,包含以下成员:

binding类型为 GPUIndex32

该绑定组内资源绑定的唯一标识符,对应于 GPUBindGroupLayoutEntry.binding 以及 GPUShaderModule 中的 @binding 属性。

resource类型为 GPUBindingResource

要绑定的资源,可以是 GPUSamplerGPUTextureGPUTextureViewGPUBufferGPUBufferBinding, 或 GPUExternalTexture

GPUBindGroupEntry 具有以下设备时间线属性

[[prevalidatedSize]],类型为 boolean

该绑定条目在创建时其缓冲区大小是否已被校验。

dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};

GPUBufferBinding 描述了要作为资源绑定的缓冲区及可选范围,包含以下成员:

buffer类型为 GPUBuffer

要绑定的 GPUBuffer

offset类型为 GPUSize64,默认值为 0

以字节为单位,buffer 起始到绑定暴露给着色器的范围起点的偏移量。

size类型为 GPUSize64

缓冲区绑定的大小(字节)。 若未提供,则指定从 offset 起至 buffer 末尾的区间。

createBindGroup(descriptor)

创建一个 GPUBindGroup

调用对象: GPUDevice this

参数:

GPUDevice.createBindGroup(descriptor) 方法参数。
参数 类型 可为 null 可选 描述
descriptor GPUBindGroupDescriptor 要创建的 GPUBindGroup 的描述信息。

返回: GPUBindGroup

内容时间线步骤:

  1. bindGroup = ! 创建新的 WebGPU 对象(this, GPUBindGroup, descriptor)。

  2. this设备时间线上执行 初始化步骤

  3. 返回 bindGroup

设备时间线 初始化步骤
  1. limits = this.[[device]].[[limits]]

  2. 若下列任一条件不满足,则生成校验错误使 bindGroup 无效并返回。

    • descriptor.layout 必须可与 this 配合使用。

    • descriptor.entries 的数量与 descriptor.layout 的条目数量完全相等。

    对于 descriptor.entries 中的每个 GPUBindGroupEntry bindingDescriptor

  3. bindGroup.[[layout]] = descriptor.layout

  4. bindGroup.[[entries]] = descriptor.entries

  5. bindGroup.[[usedResources]] = {}。

  6. descriptor.entries 中的每个 GPUBindGroupEntry bindingDescriptor

    1. internalUsagelayoutBinding绑定用法

    2. resource 所见的每个子资源,都以 internalUsage 形式添加到 [[usedResources]]

    3. layoutBindingbinding memberbufferlayoutBinding.buffer.minBindingSize0,则 bindingDescriptor.[[prevalidatedSize]] 设为 false,否则设为 true

作为纹理视图获取(get as texture view)(resource)

参数:

返回: GPUTextureView

  1. 断言 resource 必须为 GPUTextureGPUTextureView

  2. 如果 resource 为:

    GPUTexture
    1. 返回 resource.createView()

    GPUTextureView
    1. 返回 resource

作为缓冲区绑定获取(get as buffer binding)(resource)

参数:

返回: GPUBufferBinding

  1. 断言 resource 必须为 GPUBufferGPUBufferBinding

  2. 如果 resource 为:

    GPUBuffer
    1. bufferBinding 为新的 GPUBufferBinding

    2. 设置 bufferBinding.bufferresource

    3. 返回 bufferBinding

    GPUBufferBinding
    1. 返回 resource

有效缓冲区绑定大小(effective buffer binding size)(binding)

参数:

返回: GPUSize64

  1. 如果 binding.size 未提供:

    1. 返回 max(0, binding.buffer.size − binding.offset)。

  2. 返回 binding.size
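这一计算可用如下非规范性 JavaScript 草图表示(effectiveBufferBindingSize 为本文示意所设的假想函数):

```javascript
// 非规范性示意:有效缓冲区绑定大小。
function effectiveBufferBindingSize(binding) {
  if (binding.size === undefined) {
    // 未提供 size 时,范围从 offset 延伸到缓冲区末尾
    return Math.max(0, binding.buffer.size - (binding.offset ?? 0));
  }
  return binding.size;
}
```

例如,对一个 256 字节的缓冲区,offset 为 64 且未提供 size 时,有效绑定大小为 192 字节。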

两个 GPUBufferBinding 对象 ab 被认为是 缓冲区绑定别名(buffer-binding-aliasing),当且仅当满足以下全部条件:

注意:进行该计算时,任何动态偏移已应用到范围上。

8.3. GPUPipelineLayout

GPUPipelineLayout 定义了在命令编码期间通过 setBindGroup() 设置的所有 GPUBindGroup 对象的资源,与通过 GPURenderCommandsMixin.setPipelineGPUComputePassEncoder.setPipeline 设置的管线着色器之间的映射关系。

资源的完整绑定地址可以定义为以下三元组:

  1. 资源可见的着色器阶段掩码

  2. 绑定组索引(bind group index)

  3. 绑定号(binding number)

该地址的各分量也可视为管线的绑定空间。一个 GPUBindGroup (与相应的 GPUBindGroupLayout 一起) 覆盖了某个固定绑定组索引的空间。其包含的绑定需要是该绑定组索引上着色器所用资源的超集。

[Exposed=(Window, Worker), SecureContext]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;

GPUPipelineLayout 具有以下设备时间线属性

[[bindGroupLayouts]],类型为 list<GPUBindGroupLayout>, 只读

在创建时通过 GPUPipelineLayoutDescriptor.bindGroupLayouts 提供的 GPUBindGroupLayout 对象列表。

注意:为多个 GPURenderPipelineGPUComputePipeline 管线复用同一个 GPUPipelineLayout 可确保在切换这些管线时,用户代理无需在内部重新绑定资源。

GPUComputePipeline 对象 X 是用 GPUPipelineLayout.bindGroupLayouts A、B、C 创建的。GPUComputePipeline 对象 Y 是用 GPUPipelineLayout.bindGroupLayouts A、D、C 创建的。假设命令编码顺序有两次 dispatch:
  1. setBindGroup(0, ...)

  2. setBindGroup(1, ...)

  3. setBindGroup(2, ...)

  4. setPipeline(X)

  5. dispatchWorkgroups()

  6. setBindGroup(1, ...)

  7. setPipeline(Y)

  8. dispatchWorkgroups()

在该场景中,即使 GPUPipelineLayout.bindGroupLayouts 索引 2 处的 GPUBindGroupLayout 或槽 2 处的 GPUBindGroup 都未变化,用户代理仍需为第二次 dispatch 重新绑定组槽 2。

注意: GPUPipelineLayout 的建议用法是将最常用且最少变动的绑定组放在布局“底部”,即绑定组槽号较低的位置(如 0 或 1)。绑定组在绘制调用间变动越频繁,其索引应越高。该通用建议有助于用户代理最小化绘制调用间的状态切换,从而降低 CPU 开销。

8.3.1. 管线布局创建

GPUPipelineLayout 通过 GPUDevice.createPipelineLayout() 创建。

dictionary GPUPipelineLayoutDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout?> bindGroupLayouts;
};

GPUPipelineLayoutDescriptor 字典定义了管线所用的所有 GPUBindGroupLayout,包含以下成员:

bindGroupLayouts类型为 sequence<GPUBindGroupLayout?>

管线将使用的可选 GPUBindGroupLayout 列表。每个元素对应 GPUShaderModule 中的 @group 属性,第 N 个元素对应 @group(N)

createPipelineLayout(descriptor)

创建一个 GPUPipelineLayout

调用对象: GPUDevice this

参数:

GPUDevice.createPipelineLayout(descriptor) 方法参数。
参数 类型 可为 null 可选 描述
descriptor GPUPipelineLayoutDescriptor 要创建的 GPUPipelineLayout 的描述信息。

返回: GPUPipelineLayout

内容时间线步骤:

  1. pl = ! 创建新的 WebGPU 对象(this, GPUPipelineLayout, descriptor)。

  2. this设备时间线上执行 初始化步骤

  3. 返回 pl

设备时间线 初始化步骤
  1. limits = this.[[device]].[[limits]]

  2. bindGroupLayouts 为一个包含 nullGPUBindGroupLayout 列表, 大小等于 limits.maxBindGroups

  3. descriptor.bindGroupLayouts 中每个索引 ibindGroupLayout

    1. 如果 bindGroupLayout 不为 null,且 bindGroupLayout.[[descriptor]].entries 非空:

      1. 设置 bindGroupLayouts[i] 为 bindGroupLayout

  4. allEntries 为所有非 null bglbgl.[[descriptor]].entries 拼接后的结果。

  5. 若下列任一条件不满足,则生成校验错误使 pl 无效并返回。

  6. 设置 pl.[[bindGroupLayouts]]bindGroupLayouts

注意: 若两个 GPUPipelineLayout 的内部 [[bindGroupLayouts]] 序列中包含的每个 GPUBindGroupLayout 对象两两 组等价,则它们对任意用途都是等价的。

8.4. 示例

创建一个 GPUBindGroupLayout ,描述具有 uniform buffer、纹理和采样器的绑定。 然后使用该 GPUBindGroupLayout 创建 GPUBindGroupGPUPipelineLayout
const bindGroupLayout = gpuDevice.createBindGroupLayout({
    entries: [{
        binding: 0,
        visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
        buffer: {}
    }, {
        binding: 1,
        visibility: GPUShaderStage.FRAGMENT,
        texture: {}
    }, {
        binding: 2,
        visibility: GPUShaderStage.FRAGMENT,
        sampler: {}
    }]
});

const bindGroup = gpuDevice.createBindGroup({
    layout: bindGroupLayout,
    entries: [{
        binding: 0,
        resource: { buffer: buffer },
    }, {
        binding: 1,
        resource: texture
    }, {
        binding: 2,
        resource: sampler
    }]
});

const pipelineLayout = gpuDevice.createPipelineLayout({
    bindGroupLayouts: [bindGroupLayout]
});

9. 着色器模块

9.1. GPUShaderModule

[Exposed=(Window, Worker), SecureContext]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> getCompilationInfo();
};
GPUShaderModule includes GPUObjectBase;

GPUShaderModule 是对内部着色器模块对象的引用。

9.1.1. 着色器模块创建

dictionary GPUShaderModuleDescriptor
         : GPUObjectDescriptorBase {
    required USVString code;
    sequence<GPUShaderModuleCompilationHint> compilationHints = [];
};
code类型为 USVString

该着色器模块的 WGSL 源代码。

compilationHints类型为 sequence<GPUShaderModuleCompilationHint>,默认值为 []

GPUShaderModuleCompilationHint 的列表。

应用程序提供的任何 hint 应当 包含有关将来会从该入口创建管线的某个 entry point 的信息。

实现 应当 使用 GPUShaderModuleCompilationHint 中的任何信息,尽可能在 createShaderModule() 内完成编译。

除了类型检查外,这些 hint 不会以任何方式被校验。

注意:
compilationHints 中提供信息不会产生除性能以外的可观察效果。为从未创建的管线提供 hint 可能对性能有害。

由于单个着色器模块可以包含多个入口点,并且可以从一个着色器模块创建多个管线,因此实现只在 createShaderModule() 时尽可能多地编译会更高效,而不是在多次 createComputePipeline()createRenderPipeline() 时多次编译。

hint 只应用于其明确命名的入口点。与 GPUProgrammableStage.entryPoint 不同,即使模块中只有一个入口点,这里也没有默认值。

注意: hint 不会以可观察方式被校验,但用户代理 可以 向开发者(比如在浏览器开发者工具)展示可识别错误(如未知入口点名或不兼容的管线布局)。

createShaderModule(descriptor)

创建一个 GPUShaderModule

调用对象: GPUDevice this。

参数:

GPUDevice.createShaderModule(descriptor) 方法参数。
参数 类型 可为 null 可选 描述
descriptor GPUShaderModuleDescriptor 要创建的 GPUShaderModule 的描述信息。

返回: GPUShaderModule

内容时间线步骤:

  1. sm = ! 创建新的 WebGPU 对象(this, GPUShaderModule, descriptor)。

  2. this设备时间线上执行 初始化步骤

  3. 返回 sm

设备时间线 初始化步骤
  1. error 为使用 WGSL 源 descriptor.code 创建着色器模块时产生的任何错误,若无错误则为 null

  2. 若下列任一要求不满足,则生成校验错误使 sm 无效并返回。

    注意: 未分类错误不会在着色器模块创建时出现。实现如果在着色器模块创建阶段检测到此类错误,必须表现为着色器模块有效,并将错误推迟到管线创建阶段再暴露。

注意:
用户代理不应在此处校验错误的 message 文本中包含详细的编译器错误消息或着色器代码文本:这些详情可通过 getCompilationInfo() 访问。用户代理应当以其他方式为开发者提供易读、格式化的错误详情(比如在浏览器开发者工具中以可展开警告展示完整着色器源码)。

由于生产环境中着色器编译错误应该很罕见,用户代理可以选择无论错误处理方式如何(GPU 错误作用域uncapturederror 事件处理器),都向开发者展示这些错误,例如以可展开警告形式。 如果不这样做,应当提供并记录其他方式让开发者访问这些可读错误详情,比如添加一个复选框总是显示错误,或在控制台记录 GPUCompilationInfo 对象时显示可读详情。

从 WGSL 代码创建 GPUShaderModule
// 一个简单的 vertex 和 fragment 着色器对,将用红色填充视口。
const shaderSource = `
    var<private> pos : array<vec2<f32>, 3> = array<vec2<f32>, 3>(
        vec2(-1.0, -1.0), vec2(-1.0, 3.0), vec2(3.0, -1.0));

    @vertex
    fn vertexMain(@builtin(vertex_index) vertexIndex : u32) -> @builtin(position) vec4<f32> {
        return vec4(pos[vertexIndex], 1.0, 1.0);
    }

    @fragment
    fn fragmentMain() -> @location(0) vec4<f32> {
        return vec4(1.0, 0.0, 0.0, 1.0);
    }
`;

const shaderModule = gpuDevice.createShaderModule({
    code: shaderSource,
});
9.1.1.1. 着色器模块编译提示

着色器模块编译提示是可选的附加信息,用于指明某个 GPUShaderModule 的入口点将来如何被使用。对于某些实现,这些信息可以帮助更早完成着色器模块编译,从而提升性能。

dictionary GPUShaderModuleCompilationHint {
    required USVString entryPoint;
    (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};
layout类型为 (GPUPipelineLayout or GPUAutoLayoutMode)

一个 GPUPipelineLayout ,表示 GPUShaderModule 在后续 createComputePipeline()createRenderPipeline() 调用中可能会与之配合使用。 如果设置为 "auto",则会为该 hint 关联的入口点使用默认管线布局

注意:
如果可能,作者应该在 createShaderModule()createComputePipeline() / createRenderPipeline() 之间提供一致的信息。

如果应用在调用 createShaderModule() 时无法提供 hint 信息,通常不应因此推迟调用 createShaderModule(),而应在 compilationHints 序列或单个 GPUShaderModuleCompilationHint 成员中省略未知信息。省略该信息可能导致编译被延迟到 createComputePipeline() / createRenderPipeline()

如果作者无法确保传给 createShaderModule() 的 hint 信息会与后续 createComputePipeline() / createRenderPipeline() 使用的信息一致,则应避免向 createShaderModule() 提供该信息,因为不匹配的 hint 可能导致不必要的编译发生。

9.1.2. 着色器模块编译信息

enum GPUCompilationMessageType {
    "error",
    "warning",
    "info",
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
    readonly attribute unsigned long long offset;
    readonly attribute unsigned long long length;
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationInfo {
    readonly attribute FrozenArray<GPUCompilationMessage> messages;
};

GPUCompilationMessageGPUShaderModule 编译器生成的信息、警告或错误消息。这些消息面向开发者、可读性强,用于帮助诊断着色器 code 中的问题。每条消息可能对应着色器代码中的某个具体位置、某段子串,或根本不对应代码中的任何具体位置。

GPUCompilationMessage 具有以下属性:

message类型为 DOMString,只读

该编译消息的人类可读、可本地化文本

注意: message 应遵循字符串语言和方向元信息最佳实践,包括未来可能出现的相关标准。

编者注: 截至目前,还没有兼容并与传统 API 保持一致的语言/方向推荐方案,未来如有应正式采纳。

type类型为 GPUCompilationMessageType,只读

消息的严重级别。

如果 type"error", 则该消息对应 shader-creation error

lineNum类型为 unsigned long long,只读

消息对应的着色器 code 的行号。值为 1 起始,1 表示第一行。行由换行符分隔。

如果 message 对应某个子串,则此处为子串起始行。若消息不对应代码中任何具体点,则为 0

linePos类型为 unsigned long long,只读

从第 lineNum 行起始到消息对应点或子串起始的 UTF-16 单元偏移量。值为 1 起始,linePos1 表示该行第一个单元。

如对应子串,则指向子串第一个 UTF-16 单元。若消息不对应代码中任何具体点,则为 0

offset类型为 unsigned long long,只读

从着色器 code 起始到消息对应点或子串起始的 UTF-16 单元偏移量。必须与 lineNumlinePos 指向同一位置。若消息不对应代码中任何具体点,则为 0

length类型为 unsigned long long,只读

消息对应子串的 UTF-16 单元数量。若消息不对应子串则为 0

注意: GPUCompilationMessage.lineNumGPUCompilationMessage.linePos 都是从 1 起始,便于与大多数文本编辑器的行列号一致。

注意: GPUCompilationMessage.offsetGPUCompilationMessage.length 可直接传给 substr(),取出消息对应的着色器 code 子串。
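上面的位置语义可以用一段纯 JavaScript 演示。以下为假设性示例(函数名 locate 为本示例自拟):给定着色器源码与一条消息的 { offset, length },提取消息对应的子串,并反推与规范一致的 1 起始 lineNum / linePos(均按 UTF-16 单元计)。

```javascript
// 假设性示例:从 offset/length 反推 lineNum、linePos,并取出消息对应子串。
function locate(code, offset, length) {
    const before = code.slice(0, offset);
    const lineNum = before.split("\n").length;   // 1 起始行号,行由换行符分隔
    const lastBreak = before.lastIndexOf("\n");
    const linePos = offset - lastBreak;          // 1 起始列号(UTF-16 单元)
    const excerpt = code.substr(offset, length); // 消息对应的子串
    return { lineNum, linePos, excerpt };
}

const code = "fn main() {\n    retur 1;\n}";
const loc = locate(code, code.indexOf("retur"), 5);
// loc.lineNum === 2, loc.linePos === 5, loc.excerpt === "retur"
```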

getCompilationInfo()

返回 GPUShaderModule 编译期间的所有消息。

消息的位置、顺序和内容均为实现自定义,不保证以 lineNum 排序。

调用对象: GPUShaderModule this

返回: Promise<GPUCompilationInfo>

内容时间线步骤:

  1. contentTimeline 为当前 内容时间线

  2. promise新 Promise

  3. this设备时间线上执行 同步步骤

  4. 返回 promise

设备时间线 同步步骤
  1. eventthis着色器模块创建完成(无论成功与否)时发生。

  2. 监听时间线事件 eventthis.[[device]], 后续步骤在 contentTimeline 上执行。

内容时间线步骤:
  1. info 为新的 GPUCompilationInfo

  2. messagesthis着色器模块创建期间产生的所有错误、警告或信息消息(如设备丢失则为 [])。

  3. 对于 messages 中每条 message

    1. m 为新的 GPUCompilationMessage

    2. 设置 m.messagemessage 的文本。

    3. 如果 messageshader-creation error

      设置 m.type"error"

否则,如果是 warning:

      设置 m.type"warning"

      否则:

      设置 m.type"info"

    4. 如果 message 对应着色器 code 的具体子串或位置:
      1. 设置 m.lineNum 为消息指向的第一行(从 1 起)。

      2. 设置 m.linePos 为该行对应 UTF-16 单元的编号(从 1 起),若消息针对整行则为 1

      3. 设置 m.offset 为从代码开头到子串或位置的 UTF-16 单元数。

      4. 设置 m.length 为子串的 UTF-16 长度,若针对位置则为 0。

      否则:
      1. 设置 m.lineNum0

      2. 设置 m.linePos0

      3. 设置 m.offset0

      4. 设置 m.length0

    5. m 加入 info.messages

  4. resolve promiseinfo

10. 管线

管线(pipeline),无论是 GPUComputePipeline 还是 GPURenderPipeline, 都代表由 GPU 硬件、驱动和用户代理共同完成的完整功能流程:处理绑定和顶点缓冲区等输入数据,并产生输出(比如输出渲染目标的颜色)。

结构上,管线由一系列可编程阶段(着色器)和固定功能状态(如混合模式)组成。

注意: 在内部,具体实现平台可能会把部分固定功能状态转换成着色器代码,并与用户提供的着色器链接。这种链接是管线对象必须整体创建的原因之一。

这种组合状态作为单个对象创建(GPUComputePipelineGPURenderPipeline), 并可通过单条指令切换(分别为 GPUComputePassEncoder.setPipeline()GPURenderCommandsMixin.setPipeline())。

创建管线有两种方式:

同步(即时)管线创建

createComputePipeline()createRenderPipeline() 返回可立即在 pass encoder 中使用的管线对象。

失败时,管线对象无效,调用会产生 校验错误内部错误

注意: 返回的是句柄对象,但实际管线创建并非同步。如果管线创建耗时较长,可能在从创建到首次 setPipeline() 使用、 finish()、或 submit() 之间的某一处导致 设备时间线 阻塞,具体点未规定。

异步管线创建

createComputePipelineAsync()createRenderPipelineAsync() 返回 Promise,管线创建完成时解析为管线对象。

失败时,Promise 拒绝并返回 GPUPipelineError

GPUPipelineError 描述管线创建失败。

[Exposed=(Window, Worker), SecureContext, Serializable]
interface GPUPipelineError : DOMException {
    constructor(optional DOMString message = "", GPUPipelineErrorInit options);
    readonly attribute GPUPipelineErrorReason reason;
};

dictionary GPUPipelineErrorInit {
    required GPUPipelineErrorReason reason;
};

enum GPUPipelineErrorReason {
    "validation",
    "internal",
};

GPUPipelineError 构造函数:

constructor()
参数:
GPUPipelineError.constructor() 方法参数。
参数 类型 可为 null 可选 描述
message DOMString 基础 DOMException 的错误消息。
options GPUPipelineErrorInit 特定于 GPUPipelineError 的选项。

内容时间线步骤:

  1. 设置 this.name"GPUPipelineError"

  2. 设置 this.messagemessage

  3. 设置 this.reasonoptions.reason

GPUPipelineError 具有以下属性:

reason类型为 GPUPipelineErrorReason,只读

只读slot-backed 属性,暴露管线创建中遇到的错误类型,为 GPUPipelineErrorReason

GPUPipelineError 对象是可序列化对象

序列化步骤,给定 valueserialized,为:
  1. DOMException序列化步骤处理 valueserialized

反序列化步骤,给定 valueserialized,为:
  1. DOMException反序列化步骤处理 valueserialized

10.1. 基础管线

enum GPUAutoLayoutMode {
    "auto",
};

dictionary GPUPipelineDescriptorBase
         : GPUObjectDescriptorBase {
    required (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};
layout类型为 (GPUPipelineLayout or GPUAutoLayoutMode)

本管线的 GPUPipelineLayout, 或 "auto" 以自动生成管线布局。

注意: 若使用 "auto", 则此管线无法与其它管线共享 GPUBindGroup

interface mixin GPUPipelineBase {
    [NewObject] GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};

GPUPipelineBase 具有以下设备时间线属性

[[layout]],类型为 GPUPipelineLayout

定义可与 this 配合使用的资源布局。

GPUPipelineBase 具有以下方法:

getBindGroupLayout(index)

获取与 GPUPipelineBaseindex 处的 GPUBindGroupLayout 兼容的 GPUBindGroupLayout

调用对象: GPUPipelineBase this

参数:

GPUPipelineBase.getBindGroupLayout(index) 方法参数。
参数 类型 可为 null 可选 描述
index unsigned long 管线布局 [[bindGroupLayouts]] 序列的索引。

返回: GPUBindGroupLayout

内容时间线步骤:

  1. layout 为新的 GPUBindGroupLayout 对象。

  2. this设备时间线上执行 初始化步骤

  3. 返回 layout

设备时间线 初始化步骤
  1. limits = this.[[device]].[[limits]]

  2. 若下列任一条件不满足,则生成校验错误使 layout 无效并返回。

  3. 初始化 layout 使其为 this.[[layout]].[[bindGroupLayouts]][index] 的副本。

    注意: GPUBindGroupLayout 永远按值使用,而非按引用,因此这等价于返回同一个内部对象并为其创建新的 WebGPU 接口。每次都返回新的 GPUBindGroupLayout WebGPU 接口,以避免 内容时间线设备时间线之间的往返。

10.1.1. 默认管线布局

如果 GPUPipelineBase 对象在创建时 layout 设置为 "auto",则会自动创建并使用一个默认布局。

注意: 默认布局为简易管线提供便利,大多数场景推荐显式布局。由默认布局创建的绑定组无法与其他管线共享,且默认布局结构在着色器变更时可能发生变化,导致绑定组创建异常。

GPUPipelineBase pipeline 创建默认管线布局设备时间线步骤如下:

  1. groupCount = 0。

  2. groupDescsdevice.[[limits]].maxBindGroups 个新的 GPUBindGroupLayoutDescriptor 对象序列。

  3. 对每个 groupDesc in groupDescs

    1. 设置 groupDesc.entries 为一个空的 序列

  4. 对创建 pipeline 时描述中的每个 GPUProgrammableStage stageDesc

    1. shaderStagestageDescpipeline 中对应的 GPUShaderStageFlags

    2. entryPoint = get the entry point(shaderStage, stageDesc)。断言 entryPointnull

    3. entryPoint 静态使用的每个资源 resource

      1. groupresource 的 "group" 标注。

      2. bindingresource 的 "binding" 标注。

      3. entry 为新的 GPUBindGroupLayoutEntry

      4. 设置 entry.bindingbinding

      5. 设置 entry.visibilityshaderStage

      6. resource 为采样器绑定:

        1. samplerLayout 为新的 GPUSamplerBindingLayout

        2. 设置 entry.samplersamplerLayout

      7. resource 为比较采样器绑定:

        1. samplerLayout 为新的 GPUSamplerBindingLayout

        2. 设置 samplerLayout.type"comparison"

        3. 设置 entry.samplersamplerLayout

      8. resource 为缓冲区绑定:

        1. bufferLayout 为新的 GPUBufferBindingLayout

        2. 设置 bufferLayout.minBindingSizeresource最小缓冲绑定大小

        3. 如果 resource 为只读存储缓冲区:

          1. 设置 bufferLayout.type"read-only-storage"

        4. 如果 resource 为存储缓冲区:

          1. 设置 bufferLayout.type"storage"

        5. 设置 entry.bufferbufferLayout

      9. resource 为采样纹理绑定:

        1. textureLayout 为新的 GPUTextureBindingLayout

        2. 如果 resource 为深度纹理绑定:

          设置 textureLayout.sampleType"depth"

          否则,根据 resource 的采样类型:

          f32 且存在对 resource静态使用,且在带采样器的纹理内建函数调用中

          设置 textureLayout.sampleType"float"

          f32 否则

          设置 textureLayout.sampleType"unfilterable-float"

          i32

          设置 textureLayout.sampleType"sint"

          u32

          设置 textureLayout.sampleType"uint"

        3. 设置 textureLayout.viewDimensionresource 的维度。

        4. resource 为多采样纹理:

          1. 设置 textureLayout.multisampledtrue

        5. 设置 entry.texturetextureLayout

      10. resource 为存储纹理绑定:

        1. storageTextureLayout 为新的 GPUStorageTextureBindingLayout

        2. 设置 storageTextureLayout.formatresource 的格式。

        3. 设置 storageTextureLayout.viewDimensionresource 的维度。

        4. 若访问模式为:

          read

          设置 storageTextureLayout.access"read-only"

          write

          设置 storageTextureLayout.access"write-only"

          read_write

          设置 storageTextureLayout.access"read-write"

        5. 设置 entry.storageTexturestorageTextureLayout

      11. 设置 groupCount = max(groupCount, group + 1)。

      12. groupDescs[group] 已存在 binding 等于 binding 的条目 previousEntry

        1. entryvisibility 不同于 previousEntry

          1. entry.visibility 的比特位并入 previousEntry.visibility

        2. resource 为缓冲区绑定且 entrybuffer.minBindingSize 大于 previousEntry

          1. 设置 previousEntry.buffer.minBindingSizeentry.buffer.minBindingSize

        3. resource 为采样纹理绑定且 entrytexture.sampleTypepreviousEntry 不同,且二者均为 "float""unfilterable-float"

          1. 设置 previousEntry.texture.sampleType"float"

        4. entrypreviousEntry 其他属性不同:

          1. 返回 null(导致管线创建失败)。

        5. resource 为存储纹理绑定,entry.storageTexture.access"read-write"previousEntry.storageTexture.access"write-only",且格式兼容,见 § 26.1.1 普通色彩格式表:

          1. 设置 previousEntry.storageTexture.access"read-write"

      13. 否则

        1. entry 添加到 groupDescs[group]。

  5. groupLayouts 为新 列表

  6. i 从 0 到 groupCount - 1:

    1. groupDesc = groupDescs[i]。

    2. bindGroupLayout = device.createBindGroupLayout()(groupDesc)。

    3. 设置 bindGroupLayout.[[exclusivePipeline]]pipeline

    4. bindGroupLayout 添加到 groupLayouts

  7. desc 为新的 GPUPipelineLayoutDescriptor

  8. 设置 desc.bindGroupLayoutsgroupLayouts

  9. 返回 device.createPipelineLayout()(desc)。
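上面第 12 步对同一 (group, binding) 的重复条目的合并规则,可用一段纯 JavaScript 做简化演示。以下为假设性示例(mergeEntry 为本示例自拟,条目结构只保留与合并相关的字段,且未覆盖 sampleType 与 storageTexture.access 的合并分支):

```javascript
// 假设性简化示例:按位并入 visibility,并取较大的 minBindingSize。
function mergeEntry(entries, entry) {
    const prev = entries.find(e => e.binding === entry.binding);
    if (!prev) { entries.push(entry); return entries; }
    prev.visibility |= entry.visibility;   // 合并可见性比特位
    if (entry.buffer && prev.buffer) {     // 取较大的 minBindingSize
        prev.buffer.minBindingSize =
            Math.max(prev.buffer.minBindingSize, entry.buffer.minBindingSize);
    }
    return entries;
}

const GPUShaderStage = { VERTEX: 0x1, FRAGMENT: 0x2 };  // 与规范常量一致
const group = [];
mergeEntry(group, { binding: 0, visibility: GPUShaderStage.VERTEX,
                    buffer: { minBindingSize: 16 } });
mergeEntry(group, { binding: 0, visibility: GPUShaderStage.FRAGMENT,
                    buffer: { minBindingSize: 32 } });
// group[0].visibility === 0x3, group[0].buffer.minBindingSize === 32
```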

10.1.2. GPUProgrammableStage

GPUProgrammableStage 描述了用户提供的 GPUShaderModule 中控制 管线某个可编程阶段的入口点。 入口点名称遵循 WGSL 标识符对比的规则。

dictionary GPUProgrammableStage {
    required GPUShaderModule module;
    USVString entryPoint;
    record<USVString, GPUPipelineConstantValue> constants = {};
};

typedef double GPUPipelineConstantValue; // 可表示 WGSL 的 bool、f32、i32、u32、以及启用时的 f16。

GPUProgrammableStage 具有以下成员:

module类型为 GPUShaderModule

包含本可编程阶段将要执行代码的 GPUShaderModule

entryPoint类型为 USVString

本阶段执行时将使用的 module 中的函数名。

注意: 虽然 entryPoint 不是必填项,但消费 GPUProgrammableStage 的方法必须使用"获取入口点"算法确定实际入口点。

constants类型为 record<USVString, GPUPipelineConstantValue>,默认值为 {}

指定着色器模块 module可被管线重写(pipeline-overridable) 常量的值。

每个此类 可重写常量都由一个唯一的 可重写常量标识符字符串标识:若其声明指定了 管线常量 ID,则为该 ID,否则为常量名称。

每个键值对的 key 必须等于对应常量的 标识符字符串,其对比规则见WGSL 标识符对比。管线执行时,该常量将取指定值。

值类型为 GPUPipelineConstantValue,即 double。会转换为常量的 WGSL 类型(bool/i32/u32/f32/f16)。转换失败会生成校验错误。

WGSL 中定义的管线可重写常量:
@id(0)      override has_point_light: bool = true;  // 算法控制。
@id(1200)   override specular_param: f32 = 2.3;     // 数值控制。
@id(1300)   override gain: f32;                     // 必须重写。
            override width: f32 = 0.0;              // 通过 API 使用名字 "width" 指定。
            override depth: f32;                    // 通过 API 使用名字 "depth" 指定,必须重写。
            override height = 2 * depth;            // 默认值依赖另一个可重写常量。

对应的 JavaScript 代码,仅传递必须重写(无默认值)的常量:

{
    // ...
    constants: {
        1300: 2.0,  // "gain"
        depth: -1,  // "depth"
    }
}

对应的 JavaScript 代码,重写所有常量:

{
    // ...
    constants: {
        0: false,   // "has_point_light"
        1200: 3.0,  // "specular_param"
        1300: 2.0,  // "gain"
        width: 20,  // "width"
        depth: -1,  // "depth"
        height: 15, // "height"
    }
}
获取入口点(get the entry point)(GPUShaderStage stage, GPUProgrammableStage descriptor),请执行如下设备时间线步骤:
  1. 如果 descriptor.entryPoint 已提供

    1. 如果 descriptor.module 包含名称与 descriptor.entryPoint 相等且着色器阶段等于 stage 的入口点,则返回该入口点。

      否则,返回 null

    否则:

    1. 如果 descriptor.module 中着色器阶段等于 stage 的入口点恰好有一个,则返回该入口点。

      否则,返回 null

验证 GPUProgrammableStage(validating GPUProgrammableStage)(stage, descriptor, layout, device)

参数:

以下所有步骤中的要求都必须满足。若有任一不满足,则返回 false;全部满足则返回 true

  1. descriptor.module 必须 可与 device 配合使用。

  2. entryPoint = 获取入口点(stage, descriptor)。

  3. entryPoint 必须不为 null

  4. entryPoint 静态使用的每个 binding

  5. entryPoint 所在着色器阶段所有函数中的每个纹理内建函数调用,若同时使用了 sampled texturedepth texture 类型的 textureBinding 以及类型为 sampler(排除 sampler_comparison)的 samplerBinding

    1. texturetextureBinding 对应的 GPUBindGroupLayoutEntry

    2. samplersamplerBinding 对应的 GPUBindGroupLayoutEntry

    3. sampler.type"filtering", 则 texture.sampleType 必须"float"

    注意: "comparison" 采样器只能用于 "depth" 纹理,因为只有该类型能绑定到 WGSL texture_depth_* 绑定点。

  6. descriptor.constants 的每个 keyvalue

    1. key 必须等于着色器模块 descriptor.module 中某个 可重写常量(pipeline-overridable)标识符字符串(判断规则见 WGSL 标识符对比)。管线可重写常量无需被 entryPoint 静态使用。令该常量类型为 T

    2. 将 IDL 值 value 转换为 WGSL 类型 T不得抛出 TypeError

  7. entryPoint 静态使用的每个管线可重写常量 key

  8. 根据 [WGSL] 规范,不得产生 管线创建 程序错误

验证着色器绑定(validating shader binding)(variable, layout)

参数:

bindGroup 为绑定组索引,bindIndex 为绑定索引,均来自着色器绑定声明 variable

当且仅当下列所有条件满足时,返回 true

最小缓冲绑定大小(minimum buffer binding size)用于缓冲绑定变量 var,计算方式如下:
  1. Tvar存储类型(store type)

  2. T运行时尺寸数组,或包含运行时尺寸数组,则将该 array<E> 替换为 array<E, 1>

    注意: 这样确保至少有一个元素内存,允许数组索引被限制在实际长度范围内,从而保证内存访问安全。

  3. 返回 SizeOf(T)。

注意: 强制此下限可确保通过缓冲变量的读写只会访问缓冲区界限内的内存。
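该计算可用一段纯 JavaScript 做极简演示。以下为假设性示例:类型描述 { fixedSize, runtimeElemSize } 为本示例自拟,表示"固定部分大小 + 可选的运行时尺寸数组元素大小",且忽略 WGSL 的对齐与填充细节(真实的 SizeOf 须按 WGSL 内存布局规则计算):

```javascript
// 假设性简化示例:运行时尺寸数组 array<E> 被替换为 array<E, 1>,
// 即至少为一个元素保留空间。
function minBufferBindingSize(storeType) {
    const runtime = storeType.runtimeElemSize ?? 0;
    return storeType.fixedSize + runtime;
}

// 例如 struct { count: u32, data: array<f32> }:
// 固定部分 4 字节(忽略填充),运行时数组元素 4 字节
minBufferBindingSize({ fixedSize: 4, runtimeElemSize: 4 });  // → 8
minBufferBindingSize({ fixedSize: 12 });                     // 无运行时数组 → 12
```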

资源绑定、可重写管线常量、着色器阶段输入或输出, 若出现在相应入口点的着色器阶段接口中,则认为被该入口点静态使用(statically used)

10.2. GPUComputePipeline

GPUComputePipeline 是一种管线,控制计算着色器阶段, 并可用于 GPUComputePassEncoder

计算输入和输出均通过绑定传递,受指定的 GPUPipelineLayout 控制。 输出对应类型为 "storage"buffer 绑定, 以及类型为 "write-only""read-write"storageTexture 绑定。

计算管线的阶段:

  1. 计算着色器(Compute shader)

[Exposed=(Window, Worker), SecureContext]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;

10.2.1. 计算管线的创建

GPUComputePipelineDescriptor 描述了一个计算管线。更多细节参见§ 23.1 计算

dictionary GPUComputePipelineDescriptor
         : GPUPipelineDescriptorBase {
    required GPUProgrammableStage compute;
};

GPUComputePipelineDescriptor 具有以下成员:

compute类型为 GPUProgrammableStage

描述该管线的计算着色器入口点。

createComputePipeline(descriptor)

使用同步管线创建方式创建一个 GPUComputePipeline

调用对象: GPUDevice this.

参数:

GPUDevice.createComputePipeline(descriptor) 方法参数。
参数 类型 可为 null 可选 描述
descriptor GPUComputePipelineDescriptor 要创建的 GPUComputePipeline 的描述信息。

返回: GPUComputePipeline

内容时间线步骤:

  1. pipeline = ! 创建新 WebGPU 对象(create a new WebGPU object)(this, GPUComputePipeline, descriptor)。

  2. this设备时间线上执行 初始化步骤

  3. 返回 pipeline

设备时间线 初始化步骤
  1. descriptor.layout"auto", 则令 layout默认管线布局; 否则,layout = descriptor.layout

  2. 以下所有步骤要求都必须满足。若有不满足,则生成校验错误使 pipeline 无效并返回。

    1. layout 必须可与 this 配合使用。

    2. 验证 GPUProgrammableStage(COMPUTE, descriptor.compute, layout, this) 必须成功。

    3. entryPoint = 获取入口点(COMPUTE, descriptor.compute)。

      断言 entryPoint 不为 null

    4. workgroupStorageUsedroundUp(16, SizeOf(T)) 的和, 其中 TentryPoint 静态使用的所有地址空间为 "workgroup" 的变量类型。

      workgroupStorageUsed 必须device.limits.maxComputeWorkgroupStorageSize

    5. entryPoint 在每个工作组内使用的线程数 必须 ≤ device.limits.maxComputeInvocationsPerWorkgroup

    6. entryPointworkgroup_size 属性的每个分量 必须 ≤ [device.limits.maxComputeWorkgroupSizeX, device.limits.maxComputeWorkgroupSizeY, device.limits.maxComputeWorkgroupSizeZ]。

  3. 如实现管线创建过程中产生了任何管线创建未分类错误生成内部错误使 pipeline 无效并返回。

    注意: 即便实现已在着色器模块创建时检测到未分类错误,也会在此处抛出。

  4. 设置 pipeline.[[layout]] = layout
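
上面初始化步骤中关于工作组存储与工作组尺寸的限制校验,可用纯 JavaScript 模拟。以下为假设性示例(validateComputeLimits、workgroupVarSizes 等名称为本示例自拟;limits 字段名与规范的 GPUSupportedLimits 一致,数值取 WebGPU 的默认限制):

```javascript
// 假设性示例:模拟计算管线的 workgroup 存储与尺寸校验。
const roundUp = (k, n) => Math.ceil(n / k) * k;  // 与规范的 roundUp 一致

function validateComputeLimits(workgroupVarSizes, workgroupSize, limits) {
    // workgroupStorageUsed 为各 "workgroup" 变量 roundUp(16, SizeOf(T)) 之和
    const storageUsed = workgroupVarSizes
        .reduce((sum, s) => sum + roundUp(16, s), 0);
    const [x, y, z] = workgroupSize;
    return storageUsed <= limits.maxComputeWorkgroupStorageSize &&
           x * y * z <= limits.maxComputeInvocationsPerWorkgroup &&
           x <= limits.maxComputeWorkgroupSizeX &&
           y <= limits.maxComputeWorkgroupSizeY &&
           z <= limits.maxComputeWorkgroupSizeZ;
}

const limits = {  // WebGPU 默认限制值
    maxComputeWorkgroupStorageSize: 16384,
    maxComputeInvocationsPerWorkgroup: 256,
    maxComputeWorkgroupSizeX: 256,
    maxComputeWorkgroupSizeY: 256,
    maxComputeWorkgroupSizeZ: 64,
};
validateComputeLimits([1024, 20], [8, 8, 4], limits);  // 256 线程 → true
validateComputeLimits([], [16, 16, 2], limits);        // 512 线程 → false
```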

createComputePipelineAsync(descriptor)

使用异步管线创建方式创建 GPUComputePipeline。 返回的 Promise 会在管线准备好可用时 resolve。

若管线创建失败,返回的 Promise 将 reject,并返回 GPUPipelineError。 (此时不会向 device 派发 GPUError。)

注意: 建议尽可能使用本方法,可避免阻塞队列时间线的管线编译工作。

调用对象: GPUDevice this.

参数:

GPUDevice.createComputePipelineAsync(descriptor) 方法参数。
参数 类型 可为 null 可选 描述
descriptor GPUComputePipelineDescriptor 要创建的 GPUComputePipeline 的描述信息。

返回: Promise<GPUComputePipeline>

内容时间线步骤:

  1. contentTimeline 为当前内容时间线

  2. promise新 Promise

  3. this设备时间线上执行 初始化步骤

  4. 返回 promise

设备时间线 初始化步骤
  1. pipeline 为新建的 GPUComputePipeline, 类似于调用 this.createComputePipeline() 时传入 descriptor,但将产生的错误捕获为 error,而不是派发到 device。

  2. eventpipeline管线创建(无论成功与否)完成时发生。

  3. 监听时间线事件 eventthis.[[device]], 后续步骤于 设备时间线执行。

设备时间线步骤:
  1. pipeline 有效this 已丢失

    1. contentTimeline 上执行下列步骤:

      内容时间线步骤:
      1. resolve promisepipeline

    2. 返回。

    注意: 设备丢失不会产生错误。参见 § 22 错误与调试

  2. pipeline 无效error内部错误,在 contentTimeline 上执行下列步骤并返回:

    内容时间线步骤:
    1. reject promise,返回 GPUPipelineErrorreason"internal"

  3. pipeline 无效error校验错误,在 contentTimeline 上执行下列步骤并返回:

    内容时间线步骤:
    1. reject promise,返回 GPUPipelineErrorreason"validation"

创建一个简单的 GPUComputePipeline
const computePipeline = gpuDevice.createComputePipeline({
    layout: pipelineLayout,
    compute: {
        module: computeShaderModule,
        entryPoint: 'computeMain',
    }
});

10.3. GPURenderPipeline

GPURenderPipeline 是一种管线,控制顶点和片元着色器阶段,并可用于 GPURenderPassEncoder 以及 GPURenderBundleEncoder

渲染管线的输入包括:

渲染管线的输出包括:

渲染管线包含以下渲染阶段(render stages)

  1. 顶点抓取,由 GPUVertexState.buffers 控制

  2. 顶点着色器,由 GPUVertexState 控制

  3. 图元组装,由 GPUPrimitiveState 控制

  4. 光栅化,由 GPUPrimitiveStateGPUDepthStencilState、 以及 GPUMultisampleState 控制

  5. 片元着色器,由 GPUFragmentState 控制

  6. 模板测试和操作,由 GPUDepthStencilState 控制

  7. 深度测试与写入,由 GPUDepthStencilState 控制

  8. 输出合并,由 GPUFragmentState.targets 控制

[Exposed=(Window, Worker), SecureContext]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;

GPURenderPipeline 具有以下设备时间线属性

[[descriptor]],类型为 GPURenderPipelineDescriptor, 只读

描述此管线的 GPURenderPipelineDescriptor

所有可选字段均已定义。

[[writesDepth]],类型为 boolean, 只读

如果管线会写入深度/模板附件的深度分量,则为 true。

[[writesStencil]],类型为 boolean, 只读

如果管线会写入深度/模板附件的模板分量,则为 true。

10.3.1. 渲染管线的创建

GPURenderPipelineDescriptor 通过配置各渲染阶段描述一个渲染管线。详见 § 23.2 渲染

dictionary GPURenderPipelineDescriptor
         : GPUPipelineDescriptorBase {
    required GPUVertexState vertex;
    GPUPrimitiveState primitive = {};
    GPUDepthStencilState depthStencil;
    GPUMultisampleState multisample = {};
    GPUFragmentState fragment;
};

GPURenderPipelineDescriptor 具有以下成员:

vertex类型为 GPUVertexState

描述管线的顶点着色器入口点和其输入缓冲区布局。

primitive类型为 GPUPrimitiveState,默认值为 {}

描述管线的图元相关属性。

depthStencil类型为 GPUDepthStencilState

描述可选的深度-模板属性,包括测试、操作和偏移。

multisample类型为 GPUMultisampleState,默认值为 {}

描述管线的多重采样属性。

fragment类型为 GPUFragmentState

描述管线的片元着色器入口点及其输出颜色。如未提供,则启用§ 23.2.8 无颜色输出模式。

createRenderPipeline(descriptor)

使用同步管线创建方式创建 GPURenderPipeline

调用对象: GPUDevice this.

参数:

GPUDevice.createRenderPipeline(descriptor) 方法参数。
参数 类型 可为 null 可选 描述
descriptor GPURenderPipelineDescriptor 要创建的 GPURenderPipeline 描述信息。

返回: GPURenderPipeline

内容时间线步骤:

  1. descriptor.fragment 已提供

    1. 遍历 descriptor.fragment.targets 的所有非 null colorState

      1. ? 校验纹理格式所需特性colorState.formatthis.[[device]]

  2. descriptor.depthStencil 已提供

    1. ? 校验纹理格式所需特性descriptor.depthStencil.formatthis.[[device]]

  3. pipeline = ! 创建新 WebGPU 对象(this, GPURenderPipeline, descriptor)。

  4. this设备时间线上执行 初始化步骤

  5. 返回 pipeline

设备时间线 初始化步骤
  1. descriptor.layout"auto", 则令 layout默认管线布局, 否则 layout = descriptor.layout

  2. 以下步骤要求全部必须满足。如有不满足,则生成校验错误使 pipeline 无效并返回。

    1. layout 必须可与 this 配合使用。

    2. 验证 GPURenderPipelineDescriptor(descriptor, layout, this) 必须成功。

    3. vertexBufferCountdescriptor.vertex.buffers 中最后一个非 null 项的索引加 1,如无则为 0。

    4. layout.[[bindGroupLayouts]].size + vertexBufferCount 必须 ≤ this.[[device]].[[limits]].maxBindGroupsPlusVertexBuffers

  3. 如实现管线创建过程中产生任何管线创建未分类错误生成内部错误使 pipeline 无效并返回。

    注意: 即使着色器模块创建时检测到未分类错误,错误也会在此处抛出。

  4. 设置 pipeline.[[descriptor]] = descriptor

  5. 设置 pipeline.[[writesDepth]] = false。

  6. 设置 pipeline.[[writesStencil]] = false。

  7. depthStencil = descriptor.depthStencil

  8. depthStencil 非 null:

    1. depthStencil.depthWriteEnabled 已提供

      1. 设置 pipeline.[[writesDepth]] = depthStencil.depthWriteEnabled

    2. depthStencil.stencilWriteMask 非 0:

      1. stencilFront = depthStencil.stencilFront

      2. stencilBack = depthStencil.stencilBack

      3. cullMode = descriptor.primitive.cullMode

      4. cullMode 不为 "front",且 stencilFront.passOpstencilFront.depthFailOpstencilFront.failOp 任意一项非 "keep"

        1. 设置 pipeline.[[writesStencil]] = true。

      5. cullMode 不为 "back",且 stencilBack.passOpstencilBack.depthFailOpstencilBack.failOp 任意一项非 "keep"

        1. 设置 pipeline.[[writesStencil]] = true。

  9. 设置 pipeline.[[layout]] = layout
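
上面第 8 步对 [[writesStencil]] 的推导可用纯 JavaScript 模拟。以下为假设性示例(computesWritesStencil 为本示例自拟;字段名与 GPUDepthStencilState / GPUStencilFaceState 一致,stencil 各操作缺省按其默认值 "keep" 处理):

```javascript
// 假设性示例:根据 cullMode 与正/背面模板操作推导是否会写入模板分量。
function computesWritesStencil(depthStencil, cullMode) {
    if (!depthStencil || depthStencil.stencilWriteMask === 0) return false;
    const touches = s => [s.passOp, s.depthFailOp, s.failOp]
        .some(op => (op ?? "keep") !== "keep");        // 任一操作非 "keep"
    const front = depthStencil.stencilFront ?? {};
    const back = depthStencil.stencilBack ?? {};
    return (cullMode !== "front" && touches(front)) ||  // 正面未被裁剪
           (cullMode !== "back" && touches(back));      // 背面未被裁剪
}

computesWritesStencil(
    { stencilWriteMask: 0xFF, stencilFront: { passOp: "replace" } },
    "none");     // → true
computesWritesStencil(
    { stencilWriteMask: 0xFF, stencilFront: { passOp: "replace" } },
    "front");    // 正面被裁剪,背面全为 "keep" → false
```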

createRenderPipelineAsync(descriptor)

使用异步管线创建方式创建 GPURenderPipeline。 返回的 Promise 会在管线准备好可用时 resolve。

若管线创建失败,返回的 Promise 将 reject,并返回 GPUPipelineError。 (不会向 device 派发 GPUError。)

注意: 建议尽量使用本方法,可避免阻塞队列时间线的管线编译工作。

调用对象: GPUDevice this.

参数:

GPUDevice.createRenderPipelineAsync(descriptor) 方法参数。
参数 类型 可为 null 可选 描述
descriptor GPURenderPipelineDescriptor 要创建的 GPURenderPipeline 描述信息。

返回: Promise<GPURenderPipeline>

内容时间线步骤:

  1. contentTimeline 为当前内容时间线

  2. promise = 新 Promise

  3. this设备时间线上执行 初始化步骤

  4. 返回 promise

设备时间线 初始化步骤
  1. pipeline 为新建的 GPURenderPipeline, 类似于调用 this.createRenderPipeline() 时传入 descriptor,但将产生的错误捕获为 error,而不是派发到 device。

  2. eventpipeline管线创建(无论成功与否)完成时发生。

  3. 监听时间线事件 eventthis.[[device]], 后续步骤于 设备时间线执行。

设备时间线步骤:
  1. pipeline 有效this 已丢失

    1. contentTimeline 上执行下列步骤:

      内容时间线步骤:
      1. resolve promisepipeline

    2. 返回。

    注意: 设备丢失不会产生错误。参见 § 22 错误与调试

  2. pipeline 无效error内部错误,在 contentTimeline 上执行下列步骤并返回:

    内容时间线步骤:
    1. reject promise,返回 GPUPipelineErrorreason"internal"

  3. pipeline 无效error校验错误,在 contentTimeline 上执行下列步骤并返回:

    内容时间线步骤:
    1. reject promise,返回 GPUPipelineErrorreason"validation"

验证 GPURenderPipelineDescriptor(validating GPURenderPipelineDescriptor)(descriptor, layout, device)

参数:

设备时间线步骤:

  1. 若下列所有条件均满足则返回 true

验证阶段间接口(validating inter-stage interfaces)(device, descriptor)

参数:

返回值: boolean

设备时间线步骤:

  1. maxVertexShaderOutputVariables = device.limits.maxInterStageShaderVariables

  2. maxVertexShaderOutputLocation = device.limits.maxInterStageShaderVariables - 1。

  3. descriptor.primitive.topology"point-list"

    1. maxVertexShaderOutputVariables 减 1。

  4. clip_distancesdescriptor.vertex 的输出中声明:

    1. clipDistancesSizeclip_distances 的数组大小。

    2. maxVertexShaderOutputVariables 减 ceil(clipDistancesSize / 4)。

    3. maxVertexShaderOutputLocation 减 ceil(clipDistancesSize / 4)。

  5. 如下要求有任意一项不满足则返回 false

    • 用户自定义顶点输出数量不超过 maxVertexShaderOutputVariables

    • 每个用户自定义顶点输出的locationmaxVertexShaderOutputLocation

  6. descriptor.fragment 已提供

    1. maxFragmentShaderInputVariables = device.limits.maxInterStageShaderVariables

    2. front_facingsample_indexsample_mask 内建变量descriptor.fragment 的输入:

      1. maxFragmentShaderInputVariables 减 1。

    3. 如有下列任一项不满足则返回 false

      • descriptor.fragment 的每个用户自定义输入,均有 descriptor.vertex 中用户自定义输出在 location、类型和插值方式上对应。

        注意: 顶点-only 管线可有用户自定义顶点输出,其值会被丢弃。

      • 用户自定义片元输入数量不超过 maxFragmentShaderInputVariables

    4. 断言每个用户自定义片元输入的 location 均小于 device.limits.maxInterStageShaderVariables。(由上述规则推得)

  7. 返回 true

创建一个简单的 GPURenderPipeline
const renderPipeline = gpuDevice.createRenderPipeline({
    layout: pipelineLayout,
    vertex: {
        module: shaderModule,
        entryPoint: 'vertexMain'
    },
    fragment: {
        module: shaderModule,
        entryPoint: 'fragmentMain',
        targets: [{
            format: 'bgra8unorm',
        }],
    }
});

10.3.2. 图元状态(Primitive State)

dictionary GPUPrimitiveState {
    GPUPrimitiveTopology topology = "triangle-list";
    GPUIndexFormat stripIndexFormat;
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    // 需要 "depth-clip-control" 特性。
    boolean unclippedDepth = false;
};

GPUPrimitiveState 拥有以下成员,描述 GPURenderPipeline 如何根据顶点输入构造和光栅化图元:

topology类型为 GPUPrimitiveTopology,默认值为 "triangle-list"

由顶点输入生成的图元类型。

stripIndexFormat 类型为 GPUIndexFormat

用于带 strip 拓扑 ("line-strip""triangle-strip") 的管线, 决定索引缓冲区格式和图元重启值 ("uint16"/0xFFFF"uint32"/0xFFFFFFFF)。 非 strip 拓扑管线不允许设置此项。

注意: 某些实现需要提前知道图元重启值,以编译管线状态对象。

若 strip 拓扑管线用于带索引的绘制调用 (drawIndexed()drawIndexedIndirect()), 则必须设置该值,并且要与 draw 调用所用索引缓冲区格式(setIndexBuffer() 设置)一致。

详见 § 23.2.3 图元组装

frontFace类型为 GPUFrontFace,默认值为 "ccw"

定义哪些多边形被视为正面(front-facing)

cullMode类型为 GPUCullMode,默认值为 "none"

定义哪些多边形朝向会被裁剪(cull),如有的话。

unclippedDepth 类型为 boolean,默认值为 false

如为 true,表示深度裁剪(depth clipping)被禁用。

需要启用 "depth-clip-control" 特性。

验证 GPUPrimitiveState(validating GPUPrimitiveState)(descriptor, device) 参数:

设备时间线步骤:

  1. 如果下列所有条件都满足则返回 true

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip",
};

GPUPrimitiveTopology 定义了 GPURenderPipeline 的绘制调用会使用的图元类型。详见 § 23.2.5 光栅化

"point-list"

每个顶点定义一个点图元。

"line-list"

每两个连续顶点组成一个线段图元。

"line-strip"

第一个顶点后,每个顶点与前一个顶点组成一个线段图元。

"triangle-list"

每三个连续顶点组成一个三角形图元。

"triangle-strip"

前两个顶点后,每个顶点与前两个顶点组成一个三角形图元。

enum GPUFrontFace {
    "ccw",
    "cw",
};

GPUFrontFace 定义 GPURenderPipeline 视为正面(front-facing)的多边形。详见 § 23.2.5.4 多边形光栅化

"ccw"

顶点在帧缓冲区坐标中按逆时针顺序排列的多边形被视为正面

"cw"

顶点在帧缓冲区坐标中按顺时针顺序排列的多边形被视为正面

enum GPUCullMode {
    "none",
    "front",
    "back",
};

GPUCullMode 定义了哪些多边形会被 GPURenderPipeline 的绘制调用裁剪。详见 § 23.2.5.4 多边形光栅化

"none"

不丢弃任何多边形。

"front"

正面多边形将被丢弃。

"back"

背面多边形将被丢弃。

注意: GPUFrontFaceGPUCullMode 对于 "point-list""line-list""line-strip" 拓扑无效。

10.3.3. 多重采样状态(Multisample State)

dictionary GPUMultisampleState {
    GPUSize32 count = 1;
    GPUSampleMask mask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};

GPUMultisampleState 拥有以下成员,描述 GPURenderPipeline 如何与渲染通道的多重采样附件进行交互。

count类型为 GPUSize32,默认值为 1

每像素的采样数。该 GPURenderPipeline 仅与 colorAttachmentsdepthStencilAttachmentsampleCount 匹配的附件纹理兼容。

mask类型为 GPUSampleMask,默认值 0xFFFFFFFF

决定哪些采样点会被写入的掩码。

alphaToCoverageEnabled类型为 boolean,默认值 false

true 时,表示片元的 alpha 通道用于生成采样覆盖掩码。

验证 GPUMultisampleState(validating GPUMultisampleState)(descriptor) 参数:

设备时间线步骤:

  1. 若下列所有条件均满足则返回 true

10.3.4. 片元状态(Fragment State)

dictionary GPUFragmentState
         : GPUProgrammableStage {
    required sequence<GPUColorTargetState?> targets;
};
targets类型为 sequence<GPUColorTargetState?>

一个 GPUColorTargetState 列表,定义本管线写入的颜色目标的格式和行为。

验证 GPUFragmentState(validating GPUFragmentState)(device, descriptor, layout)

参数:

设备时间线步骤:

  1. 若下列所有要求都满足则返回 true

验证 GPUFragmentState 的 color attachment bytes per sample(Validating GPUFragmentState’s color attachment bytes per sample)(device, targets)

参数:

设备时间线步骤:

  1. formats 为一个空的 list<GPUTextureFormat?>

  2. 遍历 targets 中的每个 target

    1. targetundefined,则跳过。

    2. 追加 target.formatformats

  3. 计算 color attachment bytes per sample(formats) 必须 ≤ device.[[limits]].maxColorAttachmentBytesPerSample

注意: 片元着色器可输出的值可能多于管线实际使用的颜色输出,未被使用的值会被忽略。

GPUBlendComponent component 是一个 有效 GPUBlendComponent(valid GPUBlendComponent),在逻辑 device device 上,需满足以下要求:

10.3.5. 颜色目标状态(Color Target State)

dictionary GPUColorTargetState {
    required GPUTextureFormat format;

    GPUBlendState blend;
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};
format类型为 GPUTextureFormat

该颜色目标的 GPUTextureFormat。管线只与 在对应颜色附件中使用该格式的 GPURenderPassEncoderGPUTextureView 兼容。

blend类型为 GPUBlendState

该颜色目标的混合行为。如未定义,则禁用该颜色目标的混合。

writeMask类型为 GPUColorWriteFlags,默认值 0xF

控制绘制到该颜色目标时写入哪些通道的位掩码。

dictionary GPUBlendState {
    required GPUBlendComponent color;
    required GPUBlendComponent alpha;
};
color类型为 GPUBlendComponent

定义对应渲染目标颜色通道的混合行为。

alpha类型为 GPUBlendComponent

定义对应渲染目标 alpha 通道的混合行为。

typedef [EnforceRange] unsigned long GPUColorWriteFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUColorWrite {
    const GPUFlagsConstant RED   = 0x1;
    const GPUFlagsConstant GREEN = 0x2;
    const GPUFlagsConstant BLUE  = 0x4;
    const GPUFlagsConstant ALPHA = 0x8;
    const GPUFlagsConstant ALL   = 0xF;
};
10.3.5.1. 混合状态(Blend State)
dictionary GPUBlendComponent {
    GPUBlendOperation operation = "add";
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
};

GPUBlendComponent 拥有以下成员,描述片元的颜色或 alpha 分量是如何混合的:

operation类型为 GPUBlendOperation,默认值为 "add"

定义用于计算写入目标附件分量值的 GPUBlendOperation

srcFactor类型为 GPUBlendFactor,默认值为 "one"

定义对来自片元着色器的值执行的 GPUBlendFactor 操作。

dstFactor类型为 GPUBlendFactor,默认值为 "zero"

定义对来自目标附件的值执行的 GPUBlendFactor 操作。

下表说明了用于描述片元位置的颜色分量的符号:

RGBAsrc 片元着色器输出到颜色附件的颜色。 如果着色器没有返回 alpha 通道,则不能使用 src-alpha 混合因子。
RGBAsrc1 片元着色器中带有 @blend_src(1) 属性的输出写到颜色附件的颜色。 如果着色器没有返回 alpha 通道,则不能使用 src1-alpha 混合因子。
RGBAdst 当前颜色附件中的颜色。 缺失的绿色/蓝色/alpha 通道分别默认 0, 0, 1
RGBAconst 当前 [[blendConstant]]
RGBAsrcFactor srcFactor 定义的源混合因子分量。
RGBAdstFactor dstFactor 定义的目标混合因子分量。
enum GPUBlendFactor {
    "zero",
    "one",
    "src",
    "one-minus-src",
    "src-alpha",
    "one-minus-src-alpha",
    "dst",
    "one-minus-dst",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "constant",
    "one-minus-constant",
    "src1",
    "one-minus-src1",
    "src1-alpha",
    "one-minus-src1-alpha",
};

GPUBlendFactor 定义了源或目标混合因子的计算方式:

GPUBlendFactor 混合因子 RGBA 分量 特性
"zero" (0, 0, 0, 0)
"one" (1, 1, 1, 1)
"src" (Rsrc, Gsrc, Bsrc, Asrc)
"one-minus-src" (1 - Rsrc, 1 - Gsrc, 1 - Bsrc, 1 - Asrc)
"src-alpha" (Asrc, Asrc, Asrc, Asrc)
"one-minus-src-alpha" (1 - Asrc, 1 - Asrc, 1 - Asrc, 1 - Asrc)
"dst" (Rdst, Gdst, Bdst, Adst)
"one-minus-dst" (1 - Rdst, 1 - Gdst, 1 - Bdst, 1 - Adst)
"dst-alpha" (Adst, Adst, Adst, Adst)
"one-minus-dst-alpha" (1 - Adst, 1 - Adst, 1 - Adst, 1 - Adst)
"src-alpha-saturated" (min(Asrc, 1 - Adst), min(Asrc, 1 - Adst), min(Asrc, 1 - Adst), 1)
"constant" (Rconst, Gconst, Bconst, Aconst)
"one-minus-constant" (1 - Rconst, 1 - Gconst, 1 - Bconst, 1 - Aconst)
"src1" (Rsrc1, Gsrc1, Bsrc1, Asrc1) dual-source-blending
"one-minus-src1" (1 - Rsrc1, 1 - Gsrc1, 1 - Bsrc1, 1 - Asrc1)
"src1-alpha" (Asrc1, Asrc1, Asrc1, Asrc1)
"one-minus-src1-alpha" (1 - Asrc1, 1 - Asrc1, 1 - Asrc1, 1 - Asrc1)
enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max",
};

GPUBlendOperation 定义了用于组合源和目标混合因子的算法:

GPUBlendOperation RGBA 分量
"add" RGBAsrc × RGBAsrcFactor + RGBAdst × RGBAdstFactor
"subtract" RGBAsrc × RGBAsrcFactor - RGBAdst × RGBAdstFactor
"reverse-subtract" RGBAdst × RGBAdstFactor - RGBAsrc × RGBAsrcFactor
"min" min(RGBAsrc, RGBAdst)
"max" max(RGBAsrc, RGBAdst)

10.3.6. 深度/模板状态(Depth/Stencil State)

dictionary GPUDepthStencilState {
    required GPUTextureFormat format;

    boolean depthWriteEnabled;
    GPUCompareFunction depthCompare;

    GPUStencilFaceState stencilFront = {};
    GPUStencilFaceState stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};

GPUDepthStencilState 拥有以下成员,描述 GPURenderPipeline 如何影响渲染通道的 depthStencilAttachment

format类型为 GPUTextureFormat

GPURenderPipeline 可兼容的 depthStencilAttachmentformat

depthWriteEnabled类型为 boolean

指示该 GPURenderPipeline 是否可以修改 depthStencilAttachment 的深度值。

depthCompare类型为 GPUCompareFunction

用于将片元深度与 depthStencilAttachment 深度值进行比较的操作。

stencilFront类型为 GPUStencilFaceState,默认值为 {}

定义前面图元的模板比较和操作方式。

stencilBack类型为 GPUStencilFaceState,默认值为 {}

定义背面图元的模板比较和操作方式。

stencilReadMask类型为 GPUStencilValue,默认值 0xFFFFFFFF

控制执行模板测试时读取 depthStencilAttachment 模板值哪些位的位掩码。

stencilWriteMask类型为 GPUStencilValue,默认值 0xFFFFFFFF

控制执行模板操作时写入 depthStencilAttachment 模板值哪些位的位掩码。

depthBias类型为 GPUDepthBias,默认值 0

加到每个三角形片元的恒定深度偏移。详见 带偏移的片元深度(biased fragment depth)

depthBiasSlopeScale类型为 float,默认值 0

随三角形片元斜率变化的深度偏移。详见 带偏移的片元深度(biased fragment depth)

depthBiasClamp类型为 float,默认值 0

三角形片元的最大深度偏移。详见 带偏移的片元深度(biased fragment depth)

注意: depthBiasdepthBiasSlopeScaledepthBiasClamp"point-list""line-list""line-strip" 图元无效,且必须为 0。

带偏移的片元深度(biased fragment depth) 指的是对使用 GPUDepthStencilState state 绘制时写入 depthStencilAttachment attachment 的片元,计算方式如下(队列时间线步骤):
  1. format = attachment.view.format

  2. rformat 转为 32 位 float 后大于 0 的最小正可表示值。

  3. maxDepthSlope 为片元深度值的水平和垂直斜率中的最大值。

  4. formatunorm 格式:

    1. bias(float)state.depthBias * r + state.depthBiasSlopeScale * maxDepthSlope

  5. 否则,如 formatfloat 格式:

    1. bias(float)state.depthBias * 2^(exp(primitive 最大深度) - r) + state.depthBiasSlopeScale * maxDepthSlope

  6. state.depthBiasClamp > 0

    1. bias 设为 min(state.depthBiasClamp, bias)

  7. 否则如 state.depthBiasClamp < 0

    1. bias 设为 max(state.depthBiasClamp, bias)

  8. state.depthBias0state.depthBiasSlopeScale0

    1. 将片元深度值设为 片元深度值 + bias
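上述 unorm 情形的计算步骤可以直接写成非规范的 JavaScript("depth16unorm" 的最小正可表示增量为 1/65535):

```javascript
// unorm 深度格式下 biased fragment depth 的示意实现(非规范)。
// r 为该格式大于 0 的最小正可表示值。
function biasedDepth(fragmentDepth, state, maxDepthSlope, r) {
  let bias = state.depthBias * r + state.depthBiasSlopeScale * maxDepthSlope;
  if (state.depthBiasClamp > 0) bias = Math.min(state.depthBiasClamp, bias);
  else if (state.depthBiasClamp < 0) bias = Math.max(state.depthBiasClamp, bias);
  if (state.depthBias !== 0 || state.depthBiasSlopeScale !== 0) {
    return fragmentDepth + bias;
  }
  return fragmentDepth;
}

const r16 = 1 / 65535; // "depth16unorm"
const state = { depthBias: 2, depthBiasSlopeScale: 0, depthBiasClamp: 0 };
console.log(biasedDepth(0.5, state, 0, r16)); // 0.5 + 2/65535
```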

验证 GPUDepthStencilState(validating GPUDepthStencilState)(descriptor, topology)

参数:

设备时间线步骤:

  1. 当且仅当下列所有条件都满足时返回 true

dictionary GPUStencilFaceState {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

GPUStencilFaceState 拥有以下成员,描述模板比较和操作的方式:

compare类型为 GPUCompareFunction,默认值为 "always"

在用 [[stencilReference]] 的值与片元 depthStencilAttachment 的模板值进行测试时使用的 GPUCompareFunction

failOp类型为 GPUStencilOperation,默认值为 "keep"

如片元模板比较测试(由 compare 描述)失败,则执行的 GPUStencilOperation 操作。

depthFailOp类型为 GPUStencilOperation,默认值为 "keep"

如片元深度比较(由 depthCompare 描述)失败,则执行的 GPUStencilOperation 操作。

passOp类型为 GPUStencilOperation,默认值为 "keep"

如片元模板比较测试(由 compare 描述)通过,则执行的 GPUStencilOperation 操作。

enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap",
};

GPUStencilOperation 定义了以下操作:

"keep"

保持当前模板值。

"zero"

将模板值设为 0

"replace"

将模板值设为 [[stencilReference]]

"invert"

对当前模板值取按位非。

"increment-clamp"

将当前模板值加一,并钳制到 depthStencilAttachment 模板分量的最大可表示值。

"decrement-clamp"

将当前模板值减一,并钳制到 0

"increment-wrap"

将当前模板值加一,如超过 depthStencilAttachment 模板分量最大可表示值,则回绕为 0。

"decrement-wrap"

将当前模板值减一,如小于 0,则回绕为 depthStencilAttachment 模板分量最大可表示值。
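以上操作可以用非规范的 JavaScript 示意(假设 8 位模板分量,最大可表示值为 0xFF):

```javascript
// GPUStencilOperation 的示意实现(非规范,假设 8 位模板分量)。
function stencilOp(op, current, reference) {
  const MAX = 0xFF; // 8 位模板分量的最大可表示值
  switch (op) {
    case "keep":            return current;
    case "zero":            return 0;
    case "replace":         return reference;
    case "invert":          return (~current) & MAX;
    case "increment-clamp": return Math.min(current + 1, MAX);
    case "decrement-clamp": return Math.max(current - 1, 0);
    case "increment-wrap":  return (current + 1) & MAX;
    case "decrement-wrap":  return (current - 1) & MAX;
  }
}

console.log(stencilOp("increment-wrap", 0xFF, 0)); // 0(回绕)
console.log(stencilOp("decrement-wrap", 0x00, 0)); // 255(回绕)
console.log(stencilOp("invert", 0x0F, 0));         // 240
```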

10.3.7. 顶点状态(Vertex State)

enum GPUIndexFormat {
    "uint16",
    "uint32",
};

索引格式(index format)决定了缓冲区中索引值的数据类型,并且在用于 strip 图元拓扑("line-strip""triangle-strip")时,也指定了图元重启值(primitive restart value)。图元重启值(primitive restart value) 表示哪个索引值意味着应当新建一个图元,而不是继续用之前的索引顶点构建条带(strip)。

GPUPrimitiveState 指定 strip 图元拓扑时,若用于带索引绘制,必须指定 stripIndexFormat, 以便在管线创建时就明确将使用的图元重启值。 指定 list 图元拓扑的 GPUPrimitiveState 在索引渲染时会使用 setIndexBuffer() 传递的索引格式。

索引格式(Index format) 字节数(Byte size) 图元重启值(Primitive restart value)
"uint16" 2 0xFFFF
"uint32" 4 0xFFFFFFFF
10.3.7.1. 顶点格式(Vertex Formats)

GPUVertexFormat 指定顶点属性的数据格式,决定如何从顶点缓冲区读取数据并传递到着色器。格式名表明了分量的顺序、每个分量的比特数,以及该分量的顶点数据类型

每种顶点数据类型(vertex data type) 都可以映射到任意同基类型的WGSL 标量类型,无论每个分量有多少位:

顶点格式前缀(Vertex format prefix) 顶点数据类型(Vertex data type) 兼容 WGSL 类型(Compatible WGSL types)
uint unsigned int(无符号整数) u32
sint signed int(有符号整数) i32
unorm unsigned normalized(无符号归一化) f16, f32
snorm signed normalized(有符号归一化) f16, f32
float floating point(浮点数) f16, f32

多分量格式的“x”后面的数字表示分量数量。顶点格式和着色器类型的分量数不一致是允许的,多余分量会被丢弃,不足分量则用默认值补齐。

一个顶点属性,格式为 "unorm8x2" ,字节值为 [0x7F, 0xFF] ,可在着色器中通过如下类型访问:
着色器类型(Shader type) 着色器值(Shader value)
f16 0.5h
f32 0.5f
vec2<f16> vec2(0.5h, 1.0h)
vec2<f32> vec2(0.5f, 1.0f)
vec3<f16> vec3(0.5h, 1.0h, 0.0h)
vec3<f32> vec3(0.5f, 1.0f, 0.0f)
vec4<f16> vec4(0.5h, 1.0h, 0.0h, 1.0h)
vec4<f32> vec4(0.5f, 1.0f, 0.0f, 1.0f)
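上表的数值可以用非规范的 JavaScript 验证:unorm8 分量按 value / 255 归一化,0x7F 实际约为 0.498(表中的 0.5 为近似显示):

```javascript
// 将 "unorm8x2" 的字节值解码为着色器可见的归一化浮点数(非规范示意)。
function unorm8(byte) {
  return byte / 255;
}

const bytes = [0x7F, 0xFF];
console.log(bytes.map(unorm8)); // [ 0.498..., 1 ]
```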

关于顶点格式如何在着色器中暴露,详见 § 23.2.2 顶点处理

enum GPUVertexFormat {
    "uint8",
    "uint8x2",
    "uint8x4",
    "sint8",
    "sint8x2",
    "sint8x4",
    "unorm8",
    "unorm8x2",
    "unorm8x4",
    "snorm8",
    "snorm8x2",
    "snorm8x4",
    "uint16",
    "uint16x2",
    "uint16x4",
    "sint16",
    "sint16x2",
    "sint16x4",
    "unorm16",
    "unorm16x2",
    "unorm16x4",
    "snorm16",
    "snorm16x2",
    "snorm16x4",
    "float16",
    "float16x2",
    "float16x4",
    "float32",
    "float32x2",
    "float32x3",
    "float32x4",
    "uint32",
    "uint32x2",
    "uint32x3",
    "uint32x4",
    "sint32",
    "sint32x2",
    "sint32x3",
    "sint32x4",
    "unorm10-10-10-2",
    "unorm8x4-bgra",
};
顶点格式(Vertex format) 数据类型(Data type) 分量数(Components) 字节数(byteSize) 示例 WGSL 类型(Example WGSL type)
"uint8" 无符号整数(unsigned int) 1 1 u32
"uint8x2" 无符号整数 2 2 vec2<u32>
"uint8x4" 无符号整数 4 4 vec4<u32>
"sint8" 有符号整数(signed int) 1 1 i32
"sint8x2" 有符号整数 2 2 vec2<i32>
"sint8x4" 有符号整数 4 4 vec4<i32>
"unorm8" 无符号归一化(unsigned normalized) 1 1 f32
"unorm8x2" 无符号归一化 2 2 vec2<f32>
"unorm8x4" 无符号归一化 4 4 vec4<f32>
"snorm8" 有符号归一化(signed normalized) 1 1 f32
"snorm8x2" 有符号归一化 2 2 vec2<f32>
"snorm8x4" 有符号归一化 4 4 vec4<f32>
"uint16" 无符号整数 1 2 u32
"uint16x2" 无符号整数 2 4 vec2<u32>
"uint16x4" 无符号整数 4 8 vec4<u32>
"sint16" 有符号整数 1 2 i32
"sint16x2" 有符号整数 2 4 vec2<i32>
"sint16x4" 有符号整数 4 8 vec4<i32>
"unorm16" 无符号归一化 1 2 f32
"unorm16x2" 无符号归一化 2 4 vec2<f32>
"unorm16x4" 无符号归一化 4 8 vec4<f32>
"snorm16" 有符号归一化 1 2 f32
"snorm16x2" 有符号归一化 2 4 vec2<f32>
"snorm16x4" 有符号归一化 4 8 vec4<f32>
"float16" 浮点数(float) 1 2 f32
"float16x2" 浮点数 2 4 vec2<f16>
"float16x4" 浮点数 4 8 vec4<f16>
"float32" 浮点数 1 4 f32
"float32x2" 浮点数 2 8 vec2<f32>
"float32x3" 浮点数 3 12 vec3<f32>
"float32x4" 浮点数 4 16 vec4<f32>
"uint32" 无符号整数 1 4 u32
"uint32x2" 无符号整数 2 8 vec2<u32>
"uint32x3" 无符号整数 3 12 vec3<u32>
"uint32x4" 无符号整数 4 16 vec4<u32>
"sint32" 有符号整数 1 4 i32
"sint32x2" 有符号整数 2 8 vec2<i32>
"sint32x3" 有符号整数 3 12 vec3<i32>
"sint32x4" 有符号整数 4 16 vec4<i32>
"unorm10-10-10-2" 无符号归一化 4 4 vec4<f32>
"unorm8x4-bgra" 无符号归一化 4 4 vec4<f32>
enum GPUVertexStepMode {
    "vertex",
    "instance",
};

步进模式(step mode)配置了如何根据当前的顶点索引或实例索引计算顶点缓冲区数据的地址:

"vertex"

每个顶点地址按 arrayStride 递增, 在不同实例之间重置。

"instance"

每个实例地址按 arrayStride 递增。

dictionary GPUVertexState
     : GPUProgrammableStage {
sequence<GPUVertexBufferLayout?> buffers = [];
};
buffers类型为 sequence<GPUVertexBufferLayout?>,默认值为 []

一个 GPUVertexBufferLayout 列表,定义了本管线所用顶点缓冲区中顶点属性数据的布局。

顶点缓冲区(vertex buffer)在概念上是对缓冲区内存的视图,把它看作“结构体数组”。arrayStride 是该数组中每个元素(结构)之间的字节跨度。每个顶点缓冲区的元素就像一个结构体,其内存布局由 attributes 定义,attributes 描述了结构体成员。

每个 GPUVertexAttribute 描述了该成员的 format 以及在结构体中的 offset 字节偏移量。

每个 attribute 在顶点着色器中表现为一个独立输入,与一个由 shaderLocation 指定的数值 location 绑定。每个 location 在 GPUVertexState 内必须唯一。
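上述“结构体数组”模型可以用一个具体布局来说明(非规范示意;position/uv 的属性划分为假设):

```javascript
// 一个交错(interleaved)顶点缓冲区布局:每个元素(“结构体”)包含
// position (float32x3, 12 字节) 与 uv (float32x2, 8 字节),
// 因此 arrayStride = 12 + 8 = 20 字节。
const vertexBufferLayout = {
  arrayStride: 20,
  stepMode: "vertex",
  attributes: [
    { format: "float32x3", offset: 0,  shaderLocation: 0 }, // position
    { format: "float32x2", offset: 12, shaderLocation: 1 }, // uv
  ],
};

// 各 shaderLocation 在 GPUVertexState 内必须唯一:
const locations = vertexBufferLayout.attributes.map(a => a.shaderLocation);
console.log(new Set(locations).size === locations.length); // true
```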

dictionary GPUVertexBufferLayout {
    required GPUSize64 arrayStride;
    GPUVertexStepMode stepMode = "vertex";
    required sequence<GPUVertexAttribute> attributes;
};
arrayStride类型为 GPUSize64

该数组元素之间的字节跨度。

stepMode类型为 GPUVertexStepMode,默认值为 "vertex"

该数组每个元素是按顶点还是按实例计(per-vertex 或 per-instance)。

attributes类型为 sequence<GPUVertexAttribute>

定义每个元素内各顶点属性(结构体成员)的数组。

dictionary GPUVertexAttribute {
    required GPUVertexFormat format;
    required GPUSize64 offset;
    required GPUIndex32 shaderLocation;
};
format类型为 GPUVertexFormat

该属性的 GPUVertexFormat

offset类型为 GPUSize64

从该结构体起始到该属性数据的字节偏移量。

shaderLocation类型为 GPUIndex32

该属性关联的数值 location,需与 "@location" 属性vertex.module 中对应。

验证 GPUVertexBufferLayout(validating GPUVertexBufferLayout)(device, descriptor)

参数:

设备时间线步骤:

  1. 当且仅当下列所有条件都满足时,返回 true

验证 GPUVertexState(validating GPUVertexState)(device, descriptor, layout)

参数:

设备时间线步骤:

  1. entryPoint = 获取入口点(VERTEX, descriptor)。

  2. 断言(Assert) entryPoint 不为 null

  3. 以下所有要求 必须满足:

    1. 验证 GPUProgrammableStage(VERTEX, descriptor, layout, device) 必须成功。

    2. descriptor.buffers.size 必须device.[[device]].[[limits]].maxVertexBuffers

    3. 列表 descriptor.buffers 中的每个 vertexBuffer 描述符 必须通过 验证 GPUVertexBufferLayout(device, vertexBuffer)。

    4. 所有 vertexBuffer.attributes.size 之和, 必须 ≤ device.[[device]].[[limits]].maxVertexAttributes

    5. 对于 entryPoint 静态使用的每个顶点属性声明(位置 location,类型 T), 必须存在且仅存在一组 (i, j) 使得 descriptor.buffers[i]?.attributes[j].shaderLocation == location:

      attrib 为该 GPUVertexAttribute

    6. T 必须attrib.format顶点数据类型(vertex data type)兼容:

      "unorm"、"snorm" 或 "float"

      T 必须是 f32vecN<f32>

      "uint"

      T 必须是 u32vecN<u32>

      "sint"

      T 必须是 i32vecN<i32>

11. 复制(Copies)

11.1. 缓冲区复制(Buffer Copies)

缓冲区复制操作基于原始字节执行。

WebGPU 提供了“缓冲式” GPUCommandEncoder 命令:

以及“即时” GPUQueue 操作:

11.2. 纹素复制(Texel Copies)

纹素复制(Texel copy)操作针对的是纹理/图像数据,而不是字节。

WebGPU 提供“缓冲式” GPUCommandEncoder 命令:

以及“即时” GPUQueue 操作:

在纹素复制过程中,纹素会以等价纹素表示(equivalent texel representation)被复制。 纹素复制仅保证源中的有效、正常数字值在目标中具有相同的数值,但可能不会保留以下值的位表示:

注意: 复制操作可能通过 WGSL 着色器实现,因此可能观察到文档所述的任何 WGSL 浮点行为

下列定义被这些方法所用:

11.2.1. GPUTexelCopyBufferLayout

"GPUTexelCopyBufferLayout" 描述了在“缓冲区”字节(GPUBufferAllowSharedBufferSource) 中的纹素在“纹素复制”操作中的“布局”。

dictionary GPUTexelCopyBufferLayout {
    GPUSize64 offset = 0;
    GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage;
};

纹素图像(texel image)由一行或多行纹素块组成,这里称为纹素块行(texel block row)。同一 texel image 中的每个 texel block row 具有相同数量的纹素块,且其中所有纹素块的 GPUTextureFormat 一致。

GPUTexelCopyBufferLayout 描述了线性内存中多个texel image的布局。当在textureGPUBuffer之间复制数据,或通过GPUQueue写入texture时会用到。

在字节数组与纹理之间的复制操作总是以整个 纹素块为单位。无法只更新一个纹素块的一部分。

纹素块在每个texel block row中在内存中紧密排列,每个相邻的纹素块之间无填充,包括对深度或模板纹理特定 aspect 的复制:模板值以字节数组紧密排列,深度值以相应类型("depth16unorm" 或 "depth32float")数组紧密排列。

offset类型为 GPUSize64,默认值为 0

从纹素数据源(如 GPUTexelCopyBufferInfo.buffer) 起始到纹素数据起始位置的字节偏移量。

bytesPerRow类型为 GPUSize32

每个 纹素块行 起始到下一个 纹素块行 起始的字节跨度。

若存在多个 纹素块行(即复制高度或深度大于一个块)时为必填项。

rowsPerImage类型为 GPUSize32

每个 纹素图像 包含的 纹素块行数量。 rowsPerImage × bytesPerRow 即为每个 纹素图像 起始到下一个 纹素图像 起始的字节跨度。

若存在多个 纹素图像(即复制深度大于 1)时为必填项。
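例如,为一次 256×64×4(宽 × 高 × 层)的 "rgba8unorm" 复制计算布局参数(非规范示意;"rgba8unorm" 的纹素块为 1×1、每块 4 字节):

```javascript
const bytesPerBlock = 4; // "rgba8unorm" 每个纹素块的字节数
const copySize = { width: 256, height: 64, depthOrArrayLayers: 4 };

const layout = {
  offset: 0,
  bytesPerRow: copySize.width * bytesPerBlock, // 1024:每个纹素块行的跨度
  rowsPerImage: copySize.height,               // 64:每个纹素图像的行数
};

// 无行间填充时,整个复制所需的缓冲区字节数:
const totalBytes = layout.bytesPerRow * layout.rowsPerImage * copySize.depthOrArrayLayers;
console.log(totalBytes); // 262144
```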

11.2.2. GPUTexelCopyBufferInfo

"GPUTexelCopyBufferInfo" 描述了作为“缓冲区”源或目标的 GPUBufferGPUTexelCopyBufferLayout 的“参数信息”,用于“纹素复制”操作。 它与 copySize 一起描述了 GPUBuffer 中某一区域纹素的内存布局。

dictionary GPUTexelCopyBufferInfo
         : GPUTexelCopyBufferLayout {
    required GPUBuffer buffer;
};
buffer类型为 GPUBuffer

根据所调用的方法,该缓冲区要么包含要被复制的纹素数据,要么用于存储被复制的纹素数据。

验证 GPUTexelCopyBufferInfo(validating GPUTexelCopyBufferInfo)

参数:

返回: boolean

设备时间线步骤:

  1. 当且仅当下列所有条件都满足时返回 true

11.2.3. GPUTexelCopyTextureInfo

"GPUTexelCopyTextureInfo" 描述了作为“纹理”源或目标的 GPUTexture 等对象在“纹素复制”操作中的“参数信息”。 它和 copySize 一起描述了纹理的一个子区域(跨越同一 mip-map 层级的一个或多个连续纹理子资源)。

dictionary GPUTexelCopyTextureInfo {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
    GPUTextureAspect aspect = "all";
};
texture类型为 GPUTexture

要复制到或从其复制的纹理。

mipLevel类型为 GPUIntegerCoordinate,默认值为 0

要复制到或从其复制的 texture 的 mip-map 层级。

origin类型为 GPUOrigin3D,默认值为 {}

定义复制的起点——要复制到或从其复制的纹理子区域的最小角落。与 copySize 一起,定义完整的复制子区域。

aspect类型为 GPUTextureAspect,默认值为 "all"

定义要复制到/从其复制的 texture 的哪些 aspect。

纹理复制子区域(texture copy sub-region),对于 GPUTexelCopyTextureInfo copyTexture 的深度切片或数组层 index,按如下步骤确定:
  1. texture = copyTexture.texture

  2. 如果 texture.dimension 为:

    1d
    1. 断言 index0

    2. depthSliceOrLayertexture

    2d

    depthSliceOrLayertexture 的 array layer index

    3d

    depthSliceOrLayertexture 的 depth slice index

  3. textureMipdepthSliceOrLayer 的 mip 级别 copyTexture.mipLevel

  4. 返回 textureMipcopyTexture.aspect aspect。

纹素块字节偏移量(texel block byte offset),对于由 GPUTexelCopyBufferLayout bufferLayout 描述的数据,纹素块 xyGPUTexture texture 的深度切片或数组层 z,按如下步骤确定:
  1. blockBytestexture.format纹素块复制脚印

  2. imageOffset = (z × bufferLayout.rowsPerImage × bufferLayout.bytesPerRow) + bufferLayout.offset

  3. rowOffset = (y × bufferLayout.bytesPerRow) + imageOffset

  4. blockOffset = (x × blockBytes) + rowOffset

  5. 返回 blockOffset
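上述步骤可以直接写成如下非规范 JavaScript:

```javascript
// 纹素块字节偏移量(texel block byte offset)的示意实现(非规范)。
function texelBlockByteOffset(bufferLayout, x, y, z, blockBytes) {
  const imageOffset = z * bufferLayout.rowsPerImage * bufferLayout.bytesPerRow
                    + bufferLayout.offset;
  const rowOffset = y * bufferLayout.bytesPerRow + imageOffset;
  return x * blockBytes + rowOffset;
}

const layout = { offset: 8, bytesPerRow: 1024, rowsPerImage: 64 };
console.log(texelBlockByteOffset(layout, 2, 3, 1, 4));
// 1×64×1024 + 8 + 3×1024 + 2×4 = 68624
```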

验证 GPUTexelCopyTextureInfo(validating GPUTexelCopyTextureInfo)(texelCopyTextureInfo, copySize)

参数:

返回: boolean

设备时间线步骤:

  1. blockWidthtexelCopyTextureInfo.texture.format纹素块宽度

  2. blockHeighttexelCopyTextureInfo.texture.format纹素块高度

  3. 当且仅当下列所有条件满足时返回 true

验证纹理缓冲区复制(validating texture buffer copy)(texelCopyTextureInfo, bufferLayout, dataLength, copySize, textureUsage, aligned)

参数:

返回: boolean

设备时间线步骤:

  1. texture = texelCopyTextureInfo.texture

  2. aspectSpecificFormat = texture.format

  3. offsetAlignment = texture.format纹素块复制脚印

  4. 当且仅当下列所有条件满足时返回 true

    1. 验证 GPUTexelCopyTextureInfo(texelCopyTextureInfo, copySize) 返回 true

    2. texture.sampleCount 为 1。

    3. texture.usage 包含 textureUsage

    4. 如果 texture.format深度或模板格式

      1. texelCopyTextureInfo.aspect 必须指定 texture.format 的单一 aspect。

      2. 如果 textureUsage 是:

        COPY_SRC

        该 aspect 必须根据 § 26.1.2 深度模板格式 是有效的纹素复制源。

        COPY_DST

        该 aspect 必须根据 § 26.1.2 深度模板格式 是有效的纹素复制目标。

      3. 根据 § 26.1.2 深度模板格式,将 aspectSpecificFormat 设为aspect-specific format

      4. offsetAlignment 设为 4。

    5. 如果 alignedtrue

      1. bufferLayout.offset 必须是 offsetAlignment 的倍数。

    6. 验证线性纹理数据(bufferLayout, dataLength, aspectSpecificFormat, copySize) 成功。

11.2.4. GPUCopyExternalImageDestInfo

WebGPU 纹理保存的是原始数值数据,并没有附带描述颜色的语义元数据。 然而,copyExternalImageToTexture() 复制自带有颜色描述信息的源。

"GPUCopyExternalImageDestInfo" 描述了 "copyExternalImageToTexture()" 操作中 "目标(dest)" 的 "参数信息(info)"。 它是一个 GPUTexelCopyTextureInfo,但额外带有色彩空间/编码和 alpha 预乘元数据,以便在复制过程中保留语义色彩数据。 这些元数据仅影响复制操作的语义,不影响目标纹理对象本身的状态或语义。

dictionary GPUCopyExternalImageDestInfo
         : GPUTexelCopyTextureInfo {
    PredefinedColorSpace colorSpace = "srgb";
    boolean premultipliedAlpha = false;
};
colorSpace类型为 PredefinedColorSpace,默认值为 "srgb"

描述写入目标纹理时使用的色彩空间和编码。

可能导致超出 [0, 1] 范围的值被写入目标纹理(如果其格式可以表示这些值)。 否则,结果会被裁剪到目标纹理格式的允许范围内。

注意: 如果 colorSpace 与源图像一致,则可能无需进行转换。详见 § 3.10.2 色彩空间转换省略

premultipliedAlpha类型为 boolean,默认值为 false

描述写入目标纹理时,RGB 通道是否应由 alpha 通道进行预乘。

如果此选项为 true,且 source 也为预乘,则即使 RGB 分量超过其对应的 alpha 分量,也必须保留源 RGB 值。

注意: 如果 premultipliedAlpha 与源图像一致,则可能无需进行转换。详见 § 3.10.2 色彩空间转换省略

11.2.5. GPUCopyExternalImageSourceInfo

"GPUCopyExternalImageSourceInfo" 描述了 "copyExternalImageToTexture()" 操作中 "源(source)" 的 "参数信息(info)"。

typedef (ImageBitmap or
         ImageData or
         HTMLImageElement or
         HTMLVideoElement or
         VideoFrame or
         HTMLCanvasElement or
         OffscreenCanvas) GPUCopyExternalImageSource;

dictionary GPUCopyExternalImageSourceInfo {
    required GPUCopyExternalImageSource source;
    GPUOrigin2D origin = {};
    boolean flipY = false;
};

GPUCopyExternalImageSourceInfo 有如下成员:

source类型为 GPUCopyExternalImageSource

纹素复制的源。 复制源数据会在调用 copyExternalImageToTexture() 时被捕获。源尺寸由外部源尺寸表给出。

origin类型为 GPUOrigin2D,默认值为 {}

定义复制起点——要复制的源子区域的最小(左上)角。与 copySize 一起定义完整的复制区域。

flipY类型为 boolean,默认值为 false

描述源图像是否需要垂直翻转。

如果该选项为 true,则复制时会进行垂直翻转:源区域的最后一行被复制到目标区域的第一行,依此类推。 origin 仍然以源图像的左上角为原点,向下递增。

当外部源被用来创建或复制到纹理时,外部源尺寸(external source dimensions) 由源类型决定,见下表:

外部源类型(External Source type) 尺寸(Dimensions)
ImageBitmap ImageBitmap.width, ImageBitmap.height
HTMLImageElement HTMLImageElement.naturalWidth, HTMLImageElement.naturalHeight
HTMLVideoElement 帧固有宽度(intrinsic width of the frame), 帧固有高度(intrinsic height of the frame)
VideoFrame VideoFrame.displayWidth, VideoFrame.displayHeight
ImageData ImageData.width, ImageData.height
HTMLCanvasElementOffscreenCanvas (使用 CanvasRenderingContext2DGPUCanvasContextHTMLCanvasElement.width, HTMLCanvasElement.height
HTMLCanvasElementOffscreenCanvas (使用 WebGLRenderingContextBaseWebGLRenderingContextBase.drawingBufferWidth, WebGLRenderingContextBase.drawingBufferHeight
HTMLCanvasElementOffscreenCanvas (使用 ImageBitmapRenderingContextImageBitmapRenderingContext 的内部输出 bitmap ImageBitmap.width, ImageBitmap.height

11.2.6. 子程序(Subroutines)

GPUTexelCopyTextureInfo 物理子资源尺寸(physical subresource size)

参数:

返回: GPUExtent3D

GPUTexelCopyTextureInfo 物理子资源尺寸的计算如下:

widthheightdepthOrArrayLayers 分别为 texelCopyTextureInfo.texture 子资源(subresource)mipmap 级别 texelCopyTextureInfo.mipLevel物理 mip 层特定纹理范围的宽度、高度和深度。

验证线性纹理数据(validating linear texture data)(layout, byteSize, format, copyExtent)

参数:

GPUTexelCopyBufferLayout layout

线性纹理数据的布局。

GPUSize64 byteSize

线性数据的总字节数。

GPUTextureFormat format

纹理的格式。

GPUExtent3D copyExtent

要复制的纹理范围。

设备时间线步骤:

  1. 令:

  2. 若未满足下述输入校验要求则失败:

  3. 令:

    注意: 这些默认值没有实际影响,因为总是被乘以 0。

  4. requiredBytesInCopy = 0。

  5. copyExtent.depthOrArrayLayers > 0:

    1. requiredBytesInCopy 增加 bytesPerRow × rowsPerImage × (copyExtent.depthOrArrayLayers − 1)。

    2. 如果 heightInBlocks > 0:

      1. requiredBytesInCopy 增加 bytesPerRow × (heightInBlocks − 1) + bytesInLastRow

  6. 若未满足下述条件则失败:

    • 布局应在线性数据范围内: layout.offset + requiredBytesInCopybyteSize
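requiredBytesInCopy 的计算部分可以用非规范的 JavaScript 示意(假设块尺寸为 1×1 的未压缩格式,故 heightInBlocks 即复制高度):

```javascript
// “验证线性纹理数据”中 requiredBytesInCopy 的计算(非规范示意,
// 假设 1×1 纹素块的未压缩格式)。
function requiredBytesInCopy(layout, copyExtent, bytesPerBlock) {
  const heightInBlocks = copyExtent.height;
  const bytesInLastRow = copyExtent.width * bytesPerBlock;
  let required = 0;
  if (copyExtent.depthOrArrayLayers > 0) {
    required += layout.bytesPerRow * layout.rowsPerImage
              * (copyExtent.depthOrArrayLayers - 1);
    if (heightInBlocks > 0) {
      required += layout.bytesPerRow * (heightInBlocks - 1) + bytesInLastRow;
    }
  }
  return required;
}

const bufLayout = { offset: 0, bytesPerRow: 1024, rowsPerImage: 64 };
const extent = { width: 256, height: 64, depthOrArrayLayers: 2 };
console.log(requiredBytesInCopy(bufLayout, extent, 4)); // 131072
```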

验证纹理复制范围(validating texture copy range)

参数:

GPUTexelCopyTextureInfo texelCopyTextureInfo

要复制到的纹理子资源及复制起点。

GPUExtent3D copySize

纹理的尺寸。

设备时间线步骤:

  1. blockWidth = texelCopyTextureInfo.texture.format纹素块宽度

  2. blockHeight = texelCopyTextureInfo.texture.format纹素块高度

  3. subresourceSize = GPUTexelCopyTextureInfo 物理子资源尺寸

  4. 返回下述所有条件是否满足:

    注意: 纹理复制范围针对 物理(向上取整)尺寸进行校验(对于压缩格式),允许复制操作访问未完全位于纹理内的纹理块。

两个 GPUTextureFormat format1format2 当且仅当以下条件满足时为可复制兼容(copy-compatible)
纹理复制的子资源集合(set of subresources for texture copy)(texelCopyTextureInfo, copySize) 是 texture = texelCopyTextureInfo.texture 的子资源子集,满足如下条件的每个子资源 s

12. Command Buffers

Command buffers are pre-recorded lists of GPU commands (blocks of queue timeline steps) that can be submitted to a GPUQueue for execution. Each GPU command represents a task to be performed on the queue timeline, such as setting state, drawing, copying resources, etc.

A GPUCommandBuffer can only be submitted once, at which point it becomes invalidated. To reuse rendering commands across multiple submissions, use GPURenderBundle.

12.1. GPUCommandBuffer

[Exposed=(Window, Worker), SecureContext]
interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;

GPUCommandBuffer has the following device timeline properties:

[[command_list]], of type list<GPU command>, readonly

A list of GPU commands to be executed on the Queue timeline when this command buffer is submitted.

[[renderState]], of type RenderState, initially null

The current state used by any render pass commands being executed.

12.1.1. Command Buffer Creation

dictionary GPUCommandBufferDescriptor
         : GPUObjectDescriptorBase {
};

13. Command Encoding

13.1. GPUCommandsMixin

GPUCommandsMixin defines state common to all interfaces which encode commands. It has no methods.

interface mixin GPUCommandsMixin {
};

GPUCommandsMixin has the following device timeline properties:

[[state]], of type encoder state, initially "open"

The current state of the encoder.

[[commands]], of type list<GPU command>, initially []

A list of GPU commands to be executed on the Queue timeline when a GPUCommandBuffer containing these commands is submitted.

The encoder state may be one of the following:

"open"

The encoder is available to encode new commands.

"locked"

The encoder cannot be used, because it is locked by a child encoder: it is a GPUCommandEncoder, and a GPURenderPassEncoder or GPUComputePassEncoder is active. The encoder becomes "open" again when the pass is ended.

Any command issued in this state invalidates the encoder.

"ended"

The encoder has been ended and new commands can no longer be encoded.

Any command issued in this state will generate a validation error.

To Validate the encoder state of GPUCommandsMixin encoder run the
following device timeline steps:
  1. If encoder.[[state]] is:

    "open"

    Return true.

    "locked"

    Invalidate encoder and return false.

    "ended"

    Generate a validation error, and return false.

To Enqueue a command on GPUCommandsMixin encoder which issues the steps of a GPU Command command, run the following device timeline steps:
  1. Append command to encoder.[[commands]].

  2. When command is executed as part of a GPUCommandBuffer:

    1. Issue the steps of command.

13.2. GPUCommandEncoder

[Exposed=(Window, Worker), SecureContext]
interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUBuffer destination,
        optional GPUSize64 size);
    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        optional GPUSize64 size);

    undefined copyBufferToTexture(
        GPUTexelCopyBufferInfo source,
        GPUTexelCopyTextureInfo destination,
        GPUExtent3D copySize);

    undefined copyTextureToBuffer(
        GPUTexelCopyTextureInfo source,
        GPUTexelCopyBufferInfo destination,
        GPUExtent3D copySize);

    undefined copyTextureToTexture(
        GPUTexelCopyTextureInfo source,
        GPUTexelCopyTextureInfo destination,
        GPUExtent3D copySize);

    undefined clearBuffer(
        GPUBuffer buffer,
        optional GPUSize64 offset = 0,
        optional GPUSize64 size);

    undefined resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
GPUCommandEncoder includes GPUCommandsMixin;
GPUCommandEncoder includes GPUDebugCommandsMixin;

13.2.1. Command Encoder Creation

dictionary GPUCommandEncoderDescriptor
         : GPUObjectDescriptorBase {
};
createCommandEncoder(descriptor)

Creates a GPUCommandEncoder.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createCommandEncoder(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUCommandEncoderDescriptor Description of the GPUCommandEncoder to create.

Returns: GPUCommandEncoder

Content timeline steps:

  1. Let e be ! create a new WebGPU object(this, GPUCommandEncoder, descriptor).

  2. Issue the initialization steps on the Device timeline of this.

  3. Return e.

Device timeline initialization steps:
  1. If any of the following conditions are unsatisfied generate a validation error, invalidate e and return.

    • this must not be lost.

Creating a GPUCommandEncoder, encoding a command to clear a buffer, finishing the encoder to get a GPUCommandBuffer, then submitting it to the GPUQueue.
const commandEncoder = gpuDevice.createCommandEncoder();
commandEncoder.clearBuffer(buffer);
const commandBuffer = commandEncoder.finish();
gpuDevice.queue.submit([commandBuffer]);

13.3. Pass Encoding

beginRenderPass(descriptor)

Begins encoding a render pass described by descriptor.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.beginRenderPass(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderPassDescriptor Description of the GPURenderPassEncoder to create.

Returns: GPURenderPassEncoder

Content timeline steps:

  1. For each non-null colorAttachment in descriptor.colorAttachments:

    1. If colorAttachment.clearValue is provided:

      1. ? validate GPUColor shape(colorAttachment.clearValue).

  2. Let pass be a new GPURenderPassEncoder object.

  3. Issue the initialization steps on the Device timeline of this.

  4. Return pass.

Device timeline initialization steps:
  1. Validate the encoder state of this. If it returns false, invalidate pass and return.

  2. Set this.[[state]] to "locked".

  3. Let attachmentRegions be a list of [texture subresource, depthSlice?] pairs, initially empty. Each pair describes the region of the texture to be rendered to, which includes a single depth slice for "3d" textures only.

  4. For each non-null colorAttachment in descriptor.colorAttachments:

    1. Add [colorAttachment.view, colorAttachment.depthSlice ?? null] to attachmentRegions.

    2. If colorAttachment.resolveTarget is not null:

      1. Add [colorAttachment.resolveTarget, undefined] to attachmentRegions.

  5. If any of the following requirements are unmet, invalidate pass and return.

    • descriptor must meet the Valid Usage rules given device this.[[device]].

    • The set of texture regions in attachmentRegions must be pairwise disjoint. That is, no two texture regions may overlap.

  6. Add each texture subresource in attachmentRegions to pass.[[usage scope]] with usage attachment.

  7. Let depthStencilAttachment be descriptor.depthStencilAttachment.

  8. If depthStencilAttachment is not null:

    1. Let depthStencilView be depthStencilAttachment.view.

    2. Add the depth subresource of depthStencilView, if any, to pass.[[usage scope]] with usage attachment-read if depthStencilAttachment.depthReadOnly is true, or attachment otherwise.

    3. Add the stencil subresource of depthStencilView, if any, to pass.[[usage scope]] with usage attachment-read if depthStencilAttachment.stencilReadOnly is true, or attachment otherwise.

    4. Set pass.[[depthReadOnly]] to depthStencilAttachment.depthReadOnly.

    5. Set pass.[[stencilReadOnly]] to depthStencilAttachment.stencilReadOnly.

  9. Set pass.[[layout]] to derive render targets layout from pass(descriptor).

  10. If descriptor.timestampWrites is provided:

    1. Let timestampWrites be descriptor.timestampWrites.

    2. If timestampWrites.beginningOfPassWriteIndex is provided, append a GPU command to this.[[commands]] with the following steps:

      1. Before the pass commands begin executing, write the current queue timestamp into index timestampWrites.beginningOfPassWriteIndex of timestampWrites.querySet.

    3. If timestampWrites.endOfPassWriteIndex is provided, set pass.[[endTimestampWrite]] to a GPU command with the following steps:

      1. After the pass commands finish executing, write the current queue timestamp into index timestampWrites.endOfPassWriteIndex of timestampWrites.querySet.

  11. Set pass.[[drawCount]] to 0.

  12. Set pass.[[maxDrawCount]] to descriptor.maxDrawCount.

  13. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Let the [[renderState]] of the currently executing GPUCommandBuffer be a new RenderState.

  2. Set [[renderState]].[[colorAttachments]] to descriptor.colorAttachments.

  3. Set [[renderState]].[[depthStencilAttachment]] to descriptor.depthStencilAttachment.

  4. For each non-null colorAttachment in descriptor.colorAttachments:

    1. Let colorView be colorAttachment.view.

    2. If colorView.[[descriptor]].dimension is:

      "3d"

      Let colorSubregion be colorAttachment.depthSlice of colorView.

      Otherwise

      Let colorSubregion be colorView.

    3. If colorAttachment.loadOp is:

      "load"

      Ensure the contents of colorSubregion are loaded into the framebuffer memory associated with colorSubregion.

      "clear"

      Set every texel of the framebuffer memory associated with colorSubregion to colorAttachment.clearValue.

  5. If depthStencilAttachment is not null:

    1. If depthStencilAttachment.depthLoadOp is:

      Not provided

      Assert that depthStencilAttachment.depthReadOnly is true and ensure the contents of the depth subresource of depthStencilView are loaded into the framebuffer memory associated with depthStencilView.

      "load"

      Ensure the contents of the depth subresource of depthStencilView are loaded into the framebuffer memory associated with depthStencilView.

      "clear"

      Set every texel of the framebuffer memory associated with the depth subresource of depthStencilView to depthStencilAttachment.depthClearValue.

    2. If depthStencilAttachment.stencilLoadOp is:

      Not provided

      Assert that depthStencilAttachment.stencilReadOnly is true and ensure the contents of the stencil subresource of depthStencilView are loaded into the framebuffer memory associated with depthStencilView.

      "load"

      Ensure the contents of the stencil subresource of depthStencilView are loaded into the framebuffer memory associated with depthStencilView.

      "clear"

      Set every texel of the framebuffer memory associated with the stencil subresource depthStencilView to depthStencilAttachment.stencilClearValue.

Note: Read-only depth-stencil attachments are implicitly treated as though the "load" operation was used. Validation that requires the load op to not be provided for read-only attachments is done in GPURenderPassDepthStencilAttachment Valid Usage.
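The load/clear semantics above can be illustrated with a sketch of a GPURenderPassDescriptor (non-normative; the texture views here are placeholders standing in for real GPUTextureView objects):

```javascript
// Placeholders standing in for real GPUTextureView objects.
const colorTextureView = {};
const depthTextureView = {};

const renderPassDescriptor = {
  colorAttachments: [{
    view: colorTextureView,
    loadOp: "clear",                         // fill with clearValue before the pass
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    storeOp: "store",
  }],
  depthStencilAttachment: {
    view: depthTextureView,
    depthLoadOp: "clear",                    // fill with depthClearValue before the pass
    depthClearValue: 1.0,
    depthStoreOp: "store",
  },
};
console.log(renderPassDescriptor.colorAttachments[0].loadOp); // "clear"
```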

beginComputePass(descriptor)

Begins encoding a compute pass described by descriptor.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.beginComputePass(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUComputePassDescriptor

Returns: GPUComputePassEncoder

Content timeline steps:

  1. Let pass be a new GPUComputePassEncoder object.

  2. Issue the initialization steps on the Device timeline of this.

  3. Return pass.

Device timeline initialization steps:
  1. Validate the encoder state of this. If it returns false, invalidate pass and return.

  2. Set this.[[state]] to "locked".

  3. If any of the following requirements are unmet, invalidate pass and return.

  4. If descriptor.timestampWrites is provided:

    1. Let timestampWrites be descriptor.timestampWrites.

    2. If timestampWrites.beginningOfPassWriteIndex is provided, append a GPU command to this.[[commands]] with the following steps:

      1. Before the pass commands begin executing, write the current queue timestamp into index timestampWrites.beginningOfPassWriteIndex of timestampWrites.querySet.

    3. If timestampWrites.endOfPassWriteIndex is provided, set pass.[[endTimestampWrite]] to a GPU command with the following steps:

      1. After the pass commands finish executing, write the current queue timestamp into index timestampWrites.endOfPassWriteIndex of timestampWrites.querySet.

13.4. Buffer Copy Commands

copyBufferToBuffer() has two overloads:

copyBufferToBuffer(source, destination, size)

Shorthand, equivalent to copyBufferToBuffer(source, 0, destination, 0, size).

copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of a GPUBuffer to a sub-region of another GPUBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size) method.
Parameter Type Nullable Optional Description
source GPUBuffer The GPUBuffer to copy from.
sourceOffset GPUSize64 Offset in bytes into source to begin copying from.
destination GPUBuffer The GPUBuffer to copy to.
destinationOffset GPUSize64 Offset in bytes into destination to place the copied data.
size GPUSize64 Bytes to copy.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If size is undefined, set it to source.size − sourceOffset.

  3. If any of the following conditions are unsatisfied, invalidate this and return.

  4. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Copy size bytes of source, beginning at sourceOffset, into destination, beginning at destinationOffset.
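The defaulting and copy behavior above can be sketched in plain JavaScript over byte arrays. This is a non-normative model, not the WebGPU API: `source` and `destination` stand in for the contents of two GPUBuffers, and the copy that the real command performs happens on the Queue timeline.

```javascript
// Non-normative model of copyBufferToBuffer() on plain byte arrays.
function copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size) {
  // Device timeline step: size defaults to source.size - sourceOffset.
  if (size === undefined) size = source.length - sourceOffset;
  // Queue timeline step: copy `size` bytes starting at the given offsets.
  destination.set(source.subarray(sourceOffset, sourceOffset + size), destinationOffset);
}

const src = Uint8Array.from([1, 2, 3, 4, 5, 6, 7, 8]);
const dst = new Uint8Array(8);
copyBufferToBuffer(src, 4, dst, 0, 4); // copies bytes [5, 6, 7, 8] to the start of dst
```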

clearBuffer(buffer, offset, size)

Encode a command into the GPUCommandEncoder that fills a sub-region of a GPUBuffer with zeros.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.clearBuffer(buffer, offset, size) method.
Parameter Type Nullable Optional Description
buffer GPUBuffer The GPUBuffer to clear.
offset GPUSize64 Offset in bytes into buffer where the sub-region to clear begins.
size GPUSize64 Size in bytes of the sub-region to clear. Defaults to the size of the buffer minus offset.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If size is missing, set size to max(0, buffer.size - offset).

  3. If any of the following conditions are unsatisfied, invalidate this and return.

  4. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Set size bytes of buffer to 0 starting at offset.
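The size defaulting and zero-fill above can likewise be modeled over a plain byte array (non-normative; `buffer` stands in for a GPUBuffer's contents):

```javascript
// Non-normative model of clearBuffer(): fills a sub-region with zeros.
function clearBuffer(buffer, offset = 0, size) {
  // Device timeline step: size defaults to max(0, buffer.size - offset).
  if (size === undefined) size = Math.max(0, buffer.length - offset);
  // Queue timeline step: set `size` bytes to 0 starting at `offset`.
  buffer.fill(0, offset, offset + size);
}

const buf = Uint8Array.from([9, 9, 9, 9, 9, 9]);
clearBuffer(buf, 2); // zeros everything from byte 2 onward
```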

13.5. Texel Copy Commands

copyBufferToTexture(source, destination, copySize)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of a GPUBuffer to a sub-region of one or multiple contiguous texture subresources.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyBufferToTexture(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUTexelCopyBufferInfo Combined with copySize, defines the region of the source buffer.
destination GPUTexelCopyTextureInfo Combined with copySize, defines the region of the destination texture subresource.
copySize GPUExtent3D

Returns: undefined

Content timeline steps:

  1. ? validate GPUOrigin3D shape(destination.origin).

  2. ? validate GPUExtent3D shape(copySize).

  3. Issue the subsequent steps on the Device timeline of this.[[device]]:

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let aligned be true.

  3. Let dataLength be source.buffer.size.

  4. If any of the following conditions are unsatisfied, invalidate this and return.

  5. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Let blockWidth be the texel block width of destination.texture.

  2. Let blockHeight be the texel block height of destination.texture.

  3. Let dstOrigin be destination.origin.

  4. Let dstBlockOriginX be (dstOrigin.x ÷ blockWidth).

  5. Let dstBlockOriginY be (dstOrigin.y ÷ blockHeight).

  6. Let blockColumns be (copySize.width ÷ blockWidth).

  7. Let blockRows be (copySize.height ÷ blockHeight).

  8. Assert that dstBlockOriginX, dstBlockOriginY, blockColumns, and blockRows are integers.

  9. For each z in the range [0, copySize.depthOrArrayLayers − 1]:

    1. Let dstSubregion be texture copy sub-region (z + dstOrigin.z) of destination.

    2. For each y in the range [0, blockRows − 1]:

      1. For each x in the range [0, blockColumns − 1]:

        1. Let blockOffset be the texel block byte offset of source for (x, y, z) of destination.texture.

        2. Set texel block (dstBlockOriginX + x, dstBlockOriginY + y) of dstSubregion to be an equivalent texel representation to the texel block described by source.buffer at offset blockOffset.
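The texel-block arithmetic in the queue timeline steps above can be illustrated concretely. The sketch below is non-normative and assumes a format with 4×4 texel blocks (as in BC- or ETC2-style compressed formats); for uncompressed formats the block size is 1×1 and the divisions are trivial.

```javascript
// Non-normative sketch of the texel-block arithmetic for a copy into a
// texture whose texel blocks are blockWidth x blockHeight texels.
function blockCopyParams(dstOrigin, copySize, blockWidth, blockHeight) {
  const params = {
    dstBlockOriginX: dstOrigin.x / blockWidth,
    dstBlockOriginY: dstOrigin.y / blockHeight,
    blockColumns: copySize.width / blockWidth,
    blockRows: copySize.height / blockHeight,
  };
  // The spec asserts these are integers: validation has already
  // guaranteed that the origin and extent are block-aligned.
  for (const v of Object.values(params)) {
    if (!Number.isInteger(v)) throw new Error("copy region is not block-aligned");
  }
  return params;
}

// A 64x32 texel copy into origin (8, 4) of a texture with 4x4 blocks:
const p = blockCopyParams({ x: 8, y: 4 }, { width: 64, height: 32 }, 4, 4);
// p.blockColumns x p.blockRows texel blocks are copied per array layer.
```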

copyTextureToBuffer(source, destination, copySize)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of one or multiple contiguous texture subresources to a sub-region of a GPUBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyTextureToBuffer(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUTexelCopyTextureInfo Combined with copySize, defines the region of the source texture subresources.
destination GPUTexelCopyBufferInfo Combined with copySize, defines the region of the destination buffer.
copySize GPUExtent3D

Returns: undefined

Content timeline steps:

  1. ? validate GPUOrigin3D shape(source.origin).

  2. ? validate GPUExtent3D shape(copySize).

  3. Issue the subsequent steps on the Device timeline of this.[[device]]:

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let aligned be true.

  3. Let dataLength be destination.buffer.size.

  4. If any of the following conditions are unsatisfied, invalidate this and return.

  5. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Let blockWidth be the texel block width of source.texture.

  2. Let blockHeight be the texel block height of source.texture.

  3. Let srcOrigin be source.origin.

  4. Let srcBlockOriginX be (srcOrigin.x ÷ blockWidth).

  5. Let srcBlockOriginY be (srcOrigin.y ÷ blockHeight).

  6. Let blockColumns be (copySize.width ÷ blockWidth).

  7. Let blockRows be (copySize.height ÷ blockHeight).

  8. Assert that srcBlockOriginX, srcBlockOriginY, blockColumns, and blockRows are integers.

  9. For each z in the range [0, copySize.depthOrArrayLayers − 1]:

    1. Let srcSubregion be texture copy sub-region (z + srcOrigin.z) of source.

    2. For each y in the range [0, blockRows − 1]:

      1. For each x in the range [0, blockColumns − 1]:

        1. Let blockOffset be the texel block byte offset of destination for (x, y, z) of source.texture.

        2. Set destination.buffer at offset blockOffset to be an equivalent texel representation to texel block (srcBlockOriginX + x, srcBlockOriginY + y) of srcSubregion.

copyTextureToTexture(source, destination, copySize)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of one or multiple contiguous texture subresources to another sub-region of one or multiple contiguous texture subresources.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyTextureToTexture(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUTexelCopyTextureInfo Combined with copySize, defines the region of the source texture subresources.
destination GPUTexelCopyTextureInfo Combined with copySize, defines the region of the destination texture subresources.
copySize GPUExtent3D

Returns: undefined

Content timeline steps:

  1. ? validate GPUOrigin3D shape(source.origin).

  2. ? validate GPUOrigin3D shape(destination.origin).

  3. ? validate GPUExtent3D shape(copySize).

  4. Issue the subsequent steps on the Device timeline of this.[[device]]:

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Let blockWidth be the texel block width of source.texture.

  2. Let blockHeight be the texel block height of source.texture.

  3. Let srcOrigin be source.origin.

  4. Let srcBlockOriginX be (srcOrigin.x ÷ blockWidth).

  5. Let srcBlockOriginY be (srcOrigin.y ÷ blockHeight).

  6. Let dstOrigin be destination.origin.

  7. Let dstBlockOriginX be (dstOrigin.x ÷ blockWidth).

  8. Let dstBlockOriginY be (dstOrigin.y ÷ blockHeight).

  9. Let blockColumns be (copySize.width ÷ blockWidth).

  10. Let blockRows be (copySize.height ÷ blockHeight).

  11. Assert that srcBlockOriginX, srcBlockOriginY, dstBlockOriginX, dstBlockOriginY, blockColumns, and blockRows are integers.

  12. For each z in the range [0, copySize.depthOrArrayLayers − 1]:

    1. Let srcSubregion be texture copy sub-region (z + srcOrigin.z) of source.

    2. Let dstSubregion be texture copy sub-region (z + dstOrigin.z) of destination.

    3. For each y in the range [0, blockRows − 1]:

      1. For each x in the range [0, blockColumns − 1]:

        1. Set texel block (dstBlockOriginX + x, dstBlockOriginY + y) of dstSubregion to be an equivalent texel representation to texel block (srcBlockOriginX + x, srcBlockOriginY + y) of srcSubregion.

13.6. Queries

resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset)

Resolves query results from a GPUQuerySet out into a range of a GPUBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset) method.
Parameter Type Nullable Optional Description
querySet GPUQuerySet
firstQuery GPUSize32
queryCount GPUSize32
destination GPUBuffer
destinationOffset GPUSize64

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

    • querySet is valid to use with this.

    • destination is valid to use with this.

    • destination.usage contains QUERY_RESOLVE.

    • firstQuery < the number of queries in querySet.

    • (firstQuery + queryCount) ≤ the number of queries in querySet.

    • destinationOffset is a multiple of 256.

    • destinationOffset + 8 × queryCount ≤ destination.size.

  3. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Let queryIndex be firstQuery.

  2. Let offset be destinationOffset.

  3. While queryIndex < firstQuery + queryCount:

    1. Set 8 bytes of destination, beginning at offset, to be the value of querySet at queryIndex.

    2. Set queryIndex to be queryIndex + 1.

    3. Set offset to be offset + 8.
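The arithmetic validation bullets above, and the 8-bytes-per-query result layout, can be modeled as follows. This is a non-normative sketch: the validity and usage-flag checks are omitted, and `querySetCount` stands in for the number of queries in querySet.

```javascript
// Non-normative model of the arithmetic validation for resolveQuerySet().
function validateResolveQuerySet(querySetCount, firstQuery, queryCount,
                                 destinationSize, destinationOffset) {
  return firstQuery < querySetCount &&
         firstQuery + queryCount <= querySetCount &&
         destinationOffset % 256 === 0 &&
         // Each query resolves to 8 bytes (a 64-bit value).
         destinationOffset + 8 * queryCount <= destinationSize;
}

// Resolving 4 queries starting at index 2 needs 4 * 8 = 32 bytes at a
// 256-byte-aligned destination offset:
validateResolveQuerySet(8, 2, 4, 288, 256); // ok
validateResolveQuerySet(8, 2, 4, 288, 128); // fails: offset not 256-aligned
```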

13.7. Finalization

A GPUCommandBuffer containing the commands recorded by the GPUCommandEncoder can be created by calling finish(). Once finish() has been called, the command encoder can no longer be used.

finish(descriptor)

Completes recording of the command sequence and returns a corresponding GPUCommandBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.finish(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUCommandBufferDescriptor

Returns: GPUCommandBuffer

Content timeline steps:

  1. Let commandBuffer be a new GPUCommandBuffer.

  2. Issue the finish steps on the Device timeline of this.[[device]].

  3. Return commandBuffer.

Device timeline finish steps:
  1. Let validationSucceeded be true if all of the following requirements are met, and false otherwise.

  2. Set this.[[state]] to "ended".

  3. If validationSucceeded is false, then:

    1. Generate a validation error.

    2. Return an invalidated GPUCommandBuffer.

  4. Set commandBuffer.[[command_list]] to this.[[commands]].

14. Programmable Passes

interface mixin GPUBindingCommandsMixin {
    undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
        optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
        [AllowShared] Uint32Array dynamicOffsetsData,
        GPUSize64 dynamicOffsetsDataStart,
        GPUSize32 dynamicOffsetsDataLength);
};

GPUBindingCommandsMixin assumes the presence of GPUObjectBase and GPUCommandsMixin members on the same object. It must only be included by interfaces which also include those mixins.

GPUBindingCommandsMixin has the following device timeline properties:

[[bind_groups]], of type ordered map<GPUIndex32, GPUBindGroup>, initially empty

The current GPUBindGroup for each index.

[[dynamic_offsets]], of type ordered map<GPUIndex32, list<GPUBufferDynamicOffset>>, initially empty

The current dynamic offsets for each [[bind_groups]] entry.

14.1. Bind Groups

setBindGroup() has two overloads:

setBindGroup(index, bindGroup, dynamicOffsets)

Sets the current GPUBindGroup for the given index.

Called on: GPUBindingCommandsMixin this.

Arguments:

index, of type GPUIndex32, non-nullable, required

The index to set the bind group at.

bindGroup, of type GPUBindGroup, nullable, required

Bind group to use for subsequent render or compute commands.

dynamicOffsets, of type sequence<GPUBufferDynamicOffset>, non-nullable, defaulting to []

Array containing buffer offsets in bytes for each entry in bindGroup marked as buffer.hasDynamicOffset, ordered by GPUBindGroupLayoutEntry.binding. See note for additional details.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let dynamicOffsetCount be 0 if bindGroup is null, or bindGroup.[[layout]].[[dynamicOffsetCount]] if not.

  3. If any of the following requirements are unmet, invalidate this and return.

  4. If bindGroup is null:

    1. Remove this.[[bind_groups]][index].

    2. Remove this.[[dynamic_offsets]][index].

    Otherwise:

    1. If any of the following requirements are unmet, invalidate this and return.

    2. Set this.[[bind_groups]][index] to be bindGroup.

    3. Set this.[[dynamic_offsets]][index] to be a copy of dynamicOffsets.

    4. If this is a GPURenderCommandsMixin:

      1. For each bindGroup in this.[[bind_groups]], merge bindGroup.[[usedResources]] into this.[[usage scope]]

setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength)

Sets the current GPUBindGroup for the given index, specifying dynamic offsets as a subset of a Uint32Array.

Called on: GPUBindingCommandsMixin this.

Arguments:

Arguments for the GPUBindingCommandsMixin.setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength) method.
Parameter Type Nullable Optional Description
index GPUIndex32 The index to set the bind group at.
bindGroup GPUBindGroup? Bind group to use for subsequent render or compute commands.
dynamicOffsetsData Uint32Array Array containing buffer offsets in bytes for each entry in bindGroup marked as buffer.hasDynamicOffset, ordered by GPUBindGroupLayoutEntry.binding. See note for additional details.
dynamicOffsetsDataStart GPUSize64 Offset in elements into dynamicOffsetsData where the buffer offset data begins.
dynamicOffsetsDataLength GPUSize32 Number of buffer offsets to read from dynamicOffsetsData.

Returns: undefined

Content timeline steps:

  1. If any of the following requirements are unmet, throw a RangeError and return.

    • dynamicOffsetsDataStart must be ≥ 0.

    • dynamicOffsetsDataStart + dynamicOffsetsDataLength must be ≤ dynamicOffsetsData.length.

  2. Let dynamicOffsets be a list containing the range, starting at index dynamicOffsetsDataStart, of dynamicOffsetsDataLength elements of a copy of dynamicOffsetsData.

  3. Call this.setBindGroup(index, bindGroup, dynamicOffsets).
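The range check and copy in the content timeline steps above amount to the following (non-normative sketch; note a copy is taken, so mutating the Uint32Array afterward has no effect on the already-encoded command):

```javascript
// Non-normative model of the content timeline steps for the Uint32Array
// overload of setBindGroup(): validate the subrange, then copy it out.
function extractDynamicOffsets(dynamicOffsetsData, start, length) {
  if (start < 0 || start + length > dynamicOffsetsData.length) {
    throw new RangeError("dynamic offset range out of bounds");
  }
  return Array.from(dynamicOffsetsData.subarray(start, start + length));
}

const data = Uint32Array.from([0, 256, 512, 768]);
extractDynamicOffsets(data, 1, 2); // [256, 512]
```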

NOTE:
Dynamic offsets are applied in GPUBindGroupLayoutEntry.binding order.

This means that if dynamic bindings is the list of each GPUBindGroupLayoutEntry in the GPUBindGroupLayout with buffer?.hasDynamicOffset set to true, sorted by GPUBindGroupLayoutEntry.binding, then dynamic offsets[i], as supplied to setBindGroup(), will correspond to dynamic bindings[i].

For a GPUBindGroupLayout created with the following call:
// Note the bindings are listed out-of-order in this array, but it
// doesn’t matter because they will be sorted by binding index.
let layout = gpuDevice.createBindGroupLayout({
    entries: [{
        binding: 1,
        buffer: {},
    }, {
        binding: 2,
        buffer: { hasDynamicOffset: true },
    }, {
        binding: 0,
        buffer: { hasDynamicOffset: true },
    }]
});

Used by a GPUBindGroup created with the following call:

// Like above, the array order doesn’t matter here.
// It doesn’t even need to match the order used in the layout.
let bindGroup = gpuDevice.createBindGroup({
    layout: layout,
    entries: [{
        binding: 1,
        resource: { buffer: bufferA, offset: 256 },
    }, {
        binding: 2,
        resource: { buffer: bufferB, offset: 512 },
    }, {
        binding: 0,
        resource: { buffer: bufferC },
    }]
});

And bound with the following call:

pass.setBindGroup(0, bindGroup, [1024, 2048]);

The following buffer offsets will be applied:

Binding Buffer Offset
0 bufferC 1024 (Dynamic)
1 bufferA 256 (Static)
2 bufferB 2560 (Static + Dynamic)
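The mapping in the table above can be reproduced by sorting the entries by binding index and pairing each dynamic offset, in order, with the entries whose layout has hasDynamicOffset set. The sketch below is non-normative and uses plain objects in place of the WebGPU dictionaries:

```javascript
// Non-normative sketch: compute each binding's effective buffer offset as
// its static offset plus, for dynamic bindings, the next dynamic offset
// taken in increasing `binding` order.
function effectiveOffsets(layoutEntries, groupEntries, dynamicOffsets) {
  const isDynamic = new Map(layoutEntries.map(e =>
    [e.binding, !!(e.buffer && e.buffer.hasDynamicOffset)]));
  let i = 0;
  return [...groupEntries]
    .sort((a, b) => a.binding - b.binding)
    .map(e => ({
      binding: e.binding,
      offset: (e.resource.offset ?? 0) +
              (isDynamic.get(e.binding) ? dynamicOffsets[i++] : 0),
    }));
}

// Mirrors the example above: bindings 0 and 2 are dynamic.
const offsets = effectiveOffsets(
  [{ binding: 1, buffer: {} },
   { binding: 2, buffer: { hasDynamicOffset: true } },
   { binding: 0, buffer: { hasDynamicOffset: true } }],
  [{ binding: 1, resource: { offset: 256 } },
   { binding: 2, resource: { offset: 512 } },
   { binding: 0, resource: {} }],
  [1024, 2048]);
// offsets matches the table: 0 -> 1024, 1 -> 256, 2 -> 2560
```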
To iterate over each dynamic binding offset in a given GPUBindGroup bindGroup with a given list of steps to be executed for each dynamic offset, run the following device timeline steps:
  1. Let dynamicOffsetIndex be 0.

  2. Let layout be bindGroup.[[layout]].

  3. For each GPUBindGroupEntry entry in bindGroup.[[entries]] ordered in increasing values of entry.binding:

    1. Let bindingDescriptor be the GPUBindGroupLayoutEntry at layout.[[entryMap]][entry.binding]:

    2. If bindingDescriptor.buffer?.hasDynamicOffset is true:

      1. Let bufferBinding be get as buffer binding(entry.resource).

      2. Let bufferLayout be bindingDescriptor.buffer.

      3. Call steps with bufferBinding, bufferLayout, and dynamicOffsetIndex.

      4. Set dynamicOffsetIndex to dynamicOffsetIndex + 1.

Validate encoder bind groups(encoder, pipeline)

Arguments:

GPUBindingCommandsMixin encoder

Encoder whose bind groups are being validated.

GPUPipelineBase pipeline

Pipeline to validate the encoder's bind groups are compatible with.

Device timeline steps:

  1. If any of the following conditions are unsatisfied, return false:

Otherwise return true.

Encoder bind groups alias a writable resource(encoder, pipeline) if any writable buffer binding range overlaps with any other binding range of the same buffer, or any writable texture binding overlaps in texture subresources with any other texture binding (which may use the same or a different GPUTextureView object).

Note: This algorithm limits the use of the usage scope storage exception.

Arguments:

GPUBindingCommandsMixin encoder

Encoder whose bind groups are being validated.

GPUPipelineBase pipeline

Pipeline to validate the encoder's bind groups are compatible with.

Device timeline steps:

  1. For each stage in [VERTEX, FRAGMENT, COMPUTE]:

    1. Let bufferBindings be a list of (GPUBufferBinding, boolean) pairs, where the latter indicates whether the resource was used as writable.

    2. Let textureViews be a list of (GPUTextureView, boolean) pairs, where the latter indicates whether the resource was used as writable.

    3. For each pair of (GPUIndex32 bindGroupIndex, GPUBindGroupLayout bindGroupLayout) in pipeline.[[layout]].[[bindGroupLayouts]]:

      1. Let bindGroup be encoder.[[bind_groups]][bindGroupIndex].

      2. Let bindGroupLayoutEntries be bindGroupLayout.[[descriptor]].entries.

      3. Let bufferRanges be the bound buffer ranges of bindGroup, given dynamic offsets encoder.[[dynamic_offsets]][bindGroupIndex]

      4. For each (GPUBindGroupLayoutEntry bindGroupLayoutEntry, GPUBufferBinding resource) in bufferRanges, in which bindGroupLayoutEntry.visibility contains stage:

        1. Let resourceWritable be (bindGroupLayoutEntry.buffer.type == "storage").

        2. For each pair (GPUBufferBinding pastResource, boolean pastResourceWritable) in bufferBindings:

          1. If (resourceWritable or pastResourceWritable) is true, and pastResource and resource are buffer-binding-aliasing, return true.

        3. Append (resource, resourceWritable) to bufferBindings.

      5. For each GPUBindGroupLayoutEntry bindGroupLayoutEntry in bindGroupLayoutEntries, and corresponding GPUTextureView resource in bindGroup, in which bindGroupLayoutEntry.visibility contains stage:

        1. If bindGroupLayoutEntry.storageTexture is not provided, continue.

        2. Let resourceWritable be whether bindGroupLayoutEntry.storageTexture.access is a writable access mode.

        3. For each pair (GPUTextureView pastResource, boolean pastResourceWritable) in textureViews,

          1. If (resourceWritable or pastResourceWritable) is true, and pastResource and resource are texture-view-aliasing, return true.

        4. Append (resource, resourceWritable) to textureViews.

  2. Return false.

Note: Implementations are strongly encouraged to optimize this algorithm.
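For the buffer case, buffer-binding-aliasing reduces to a half-open byte-range intersection once both bindings are known to resolve to the same GPUBuffer. A minimal non-normative sketch:

```javascript
// Non-normative sketch: two binding ranges of the same buffer alias if
// their half-open byte ranges [offset, offset + size) intersect.
function rangesAlias(offsetA, sizeA, offsetB, sizeB) {
  return offsetA < offsetB + sizeB && offsetB < offsetA + sizeA;
}

rangesAlias(0, 256, 128, 256);  // [0, 256) overlaps [128, 384): aliasing
rangesAlias(0, 256, 256, 256);  // adjacent ranges do not alias
```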

15. Debug Markers

GPUDebugCommandsMixin provides methods to apply debug labels to groups of commands or insert a single label into the command sequence.

Debug groups can be nested to create a hierarchy of labeled commands, and must be well-balanced.

Like object labels, these labels have no required behavior, but may be shown in error messages and browser developer tools, and may be passed to native API backends.

interface mixin GPUDebugCommandsMixin {
    undefined pushDebugGroup(USVString groupLabel);
    undefined popDebugGroup();
    undefined insertDebugMarker(USVString markerLabel);
};

GPUDebugCommandsMixin assumes the presence of GPUObjectBase and GPUCommandsMixin members on the same object. It must only be included by interfaces which also include those mixins.

GPUDebugCommandsMixin has the following device timeline properties:

[[debug_group_stack]], of type stack<USVString>

A stack of active debug group labels.

GPUDebugCommandsMixin has the following methods:

pushDebugGroup(groupLabel)

Begins a labeled debug group containing subsequent commands.

Called on: GPUDebugCommandsMixin this.

Arguments:

Arguments for the GPUDebugCommandsMixin.pushDebugGroup(groupLabel) method.
Parameter Type Nullable Optional Description
groupLabel USVString The label for the command group.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Push groupLabel onto this.[[debug_group_stack]].

popDebugGroup()

Ends the labeled debug group most recently started by pushDebugGroup().

Called on: GPUDebugCommandsMixin this.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following requirements are unmet, invalidate this and return.

  3. Pop an entry off of this.[[debug_group_stack]].

insertDebugMarker(markerLabel)

Marks a point in a stream of commands with a label.

Called on: GPUDebugCommandsMixin this.

Arguments:

Arguments for the GPUDebugCommandsMixin.insertDebugMarker(markerLabel) method.
Parameter Type Nullable Optional Description
markerLabel USVString The label to insert.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

16. Compute Passes

16.1. GPUComputePassEncoder

[Exposed=(Window, Worker), SecureContext]
interface GPUComputePassEncoder {
    undefined setPipeline(GPUComputePipeline pipeline);
    undefined dispatchWorkgroups(GPUSize32 workgroupCountX, optional GPUSize32 workgroupCountY = 1, optional GPUSize32 workgroupCountZ = 1);
    undefined dispatchWorkgroupsIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    undefined end();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUCommandsMixin;
GPUComputePassEncoder includes GPUDebugCommandsMixin;
GPUComputePassEncoder includes GPUBindingCommandsMixin;

GPUComputePassEncoder has the following device timeline properties:

[[command_encoder]], of type GPUCommandEncoder, readonly

The GPUCommandEncoder that created this compute pass encoder.

[[endTimestampWrite]], of type GPU command?, readonly, defaulting to null

GPU command, if any, writing a timestamp when the pass ends.

[[pipeline]], of type GPUComputePipeline, initially null

The current GPUComputePipeline.

16.1.1. Compute Pass Encoder Creation

dictionary GPUComputePassTimestampWrites {
    required GPUQuerySet querySet;
    GPUSize32 beginningOfPassWriteIndex;
    GPUSize32 endOfPassWriteIndex;
};
querySet, of type GPUQuerySet

The GPUQuerySet, of type "timestamp", that the query results will be written to.

beginningOfPassWriteIndex, of type GPUSize32

If defined, indicates the query index in querySet into which the timestamp at the beginning of the compute pass will be written.

endOfPassWriteIndex, of type GPUSize32

If defined, indicates the query index in querySet into which the timestamp at the end of the compute pass will be written.

Note: Timestamp query values are written in nanoseconds, but how the value is determined is implementation-defined and may not increase monotonically. See § 20.4 Timestamp Query for details.

dictionary GPUComputePassDescriptor
         : GPUObjectDescriptorBase {
    GPUComputePassTimestampWrites timestampWrites;
};
timestampWrites, of type GPUComputePassTimestampWrites

Defines which timestamp values will be written for this pass, and where to write them to.

16.1.2. Dispatch

setPipeline(pipeline)

Sets the current GPUComputePipeline.

Called on: GPUComputePassEncoder this.

Arguments:

Arguments for the GPUComputePassEncoder.setPipeline(pipeline) method.
Parameter Type Nullable Optional Description
pipeline GPUComputePipeline The compute pipeline to use for subsequent dispatch commands.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Set this.[[pipeline]] to be pipeline.

dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ)

Dispatch work to be performed with the current GPUComputePipeline. See § 23.1 Computing for the detailed specification.

Called on: GPUComputePassEncoder this.

Arguments:

Arguments for the GPUComputePassEncoder.dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ) method.
Parameter Type Nullable Optional Description
workgroupCountX GPUSize32 X dimension of the grid of workgroups to dispatch.
workgroupCountY GPUSize32 Y dimension of the grid of workgroups to dispatch.
workgroupCountZ GPUSize32 Z dimension of the grid of workgroups to dispatch.
NOTE:
The x, y, and z values passed to dispatchWorkgroups() and dispatchWorkgroupsIndirect() are the number of workgroups to dispatch for each dimension, not the number of shader invocations to perform across each dimension. This matches the behavior of modern native GPU APIs, but differs from the behavior of OpenCL.

This means that if a GPUShaderModule defines an entry point with @workgroup_size(4, 4), and work is dispatched to it with the call computePass.dispatchWorkgroups(8, 8), the entry point will be invoked 1024 times in total: a 4×4 workgroup dispatched 8 times along both the X and Y axes (4 × 4 × 8 × 8 = 1024).

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let usageScope be an empty usage scope.

  3. For each bindGroup in this.[[bind_groups]], merge bindGroup.[[usedResources]] into usageScope.

  4. If any of the following conditions are unsatisfied, invalidate this and return.

  5. Let bindingState be a snapshot of this’s current state.

  6. Enqueue a command on this which issues the subsequent steps on the Queue timeline.

Queue timeline steps:
  1. Execute a grid of workgroups with dimensions [workgroupCountX, workgroupCountY, workgroupCountZ] with bindingState.[[pipeline]] using bindingState.[[bind_groups]].
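The invocation-count arithmetic in the note above can be expressed directly (non-normative; omitted dimensions default to 1, matching both @workgroup_size and dispatchWorkgroups()):

```javascript
// Non-normative: the total number of shader invocations for a dispatch is
// the product of the workgroup size and the dispatched workgroup counts.
function totalInvocations(workgroupSize, workgroupCounts) {
  const [sx, sy = 1, sz = 1] = workgroupSize;
  const [cx, cy = 1, cz = 1] = workgroupCounts;
  return sx * sy * sz * cx * cy * cz;
}

// @workgroup_size(4, 4) dispatched with dispatchWorkgroups(8, 8):
totalInvocations([4, 4], [8, 8]); // 1024
```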

dispatchWorkgroupsIndirect(indirectBuffer, indirectOffset)

Dispatch work to be performed with the current GPUComputePipeline using parameters read from a GPUBuffer. See § 23.1 Computing for the detailed specification.

The indirect dispatch parameters encoded in the buffer must be a tightly packed block of three 32-bit unsigned integer values (12 bytes total), given in the same order as the arguments for dispatchWorkgroups(). For example:

let dispatchIndirectParameters = new Uint32Array(3);
dispatchIndirectParameters[0] = workgroupCountX;
dispatchIndirectParameters[1] = workgroupCountY;
dispatchIndirectParameters[2] = workgroupCountZ;
Called on: GPUComputePassEncoder this.

Arguments:

Arguments for the GPUComputePassEncoder.dispatchWorkgroupsIndirect(indirectBuffer, indirectOffset) method.
Parameter Type Nullable Optional Description
indirectBuffer GPUBuffer Buffer containing the indirect dispatch parameters.
indirectOffset GPUSize64 Offset in bytes into indirectBuffer where the dispatch data begins.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let usageScope be an empty usage scope.

  3. For each bindGroup in this.[[bind_groups]], merge bindGroup.[[usedResources]] into usageScope.

  4. Add indirectBuffer to usageScope with usage input.

  5. If any of the following conditions are unsatisfied, invalidate this and return.

  6. Let bindingState be a snapshot of this’s current state.

  7. Enqueue a command on this which issues the subsequent steps on the Queue timeline.

Queue timeline steps:
  1. Let workgroupCountX be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.

  2. Let workgroupCountY be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 4) bytes.

  3. Let workgroupCountZ be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 8) bytes.

  4. If workgroupCountX, workgroupCountY, or workgroupCountZ is greater than this.device.limits.maxComputeWorkgroupsPerDimension, return.

  5. Execute a grid of workgroups with dimensions [workgroupCountX, workgroupCountY, workgroupCountZ] with bindingState.[[pipeline]] using bindingState.[[bind_groups]].
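The three queue timeline reads above amount to decoding three consecutive 32-bit unsigned integers starting at indirectOffset. A non-normative sketch over an ArrayBuffer (the real reads happen on GPU-visible memory; little-endian byte order is assumed here, matching typical GPU hosts):

```javascript
// Non-normative sketch of the indirect dispatch parameter reads.
function readDispatchParams(arrayBuffer, indirectOffset) {
  const view = new DataView(arrayBuffer);
  return {
    workgroupCountX: view.getUint32(indirectOffset + 0, true),
    workgroupCountY: view.getUint32(indirectOffset + 4, true),
    workgroupCountZ: view.getUint32(indirectOffset + 8, true),
  };
}

// 16 bytes of buffer contents; the dispatch parameters begin at byte 4.
const params = new Uint32Array([7, 3, 2, 1]);
readDispatchParams(params.buffer, 4); // counts (3, 2, 1)
```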

16.1.3. Finalization

The compute pass encoder can be ended by calling end() once the user has finished recording commands for the pass. Once end() has been called, the compute pass encoder can no longer be used.

end()

Completes recording of the compute pass command sequence.

Called on: GPUComputePassEncoder this.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Let parentEncoder be this.[[command_encoder]].

  2. If any of the following requirements are unmet, generate a validation error and return.

  3. Set this.[[state]] to "ended".

  4. Set parentEncoder.[[state]] to "open".

  5. If any of the following requirements are unmet, invalidate parentEncoder and return.

  6. Extend parentEncoder.[[commands]] with this.[[commands]].

  7. If this.[[endTimestampWrite]] is not null:

    1. Extend parentEncoder.[[commands]] with this.[[endTimestampWrite]].

17. Render Passes

17.1. GPURenderPassEncoder

[Exposed=(Window, Worker), SecureContext]
interface GPURenderPassEncoder {
    undefined setViewport(float x, float y,
        float width, float height,
        float minDepth, float maxDepth);

    undefined setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    undefined setBlendConstant(GPUColor color);
    undefined setStencilReference(GPUStencilValue reference);

    undefined beginOcclusionQuery(GPUSize32 queryIndex);
    undefined endOcclusionQuery();

    undefined executeBundles(sequence<GPURenderBundle> bundles);
    undefined end();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUCommandsMixin;
GPURenderPassEncoder includes GPUDebugCommandsMixin;
GPURenderPassEncoder includes GPUBindingCommandsMixin;
GPURenderPassEncoder includes GPURenderCommandsMixin;

GPURenderPassEncoder has the following device timeline properties:

[[command_encoder]], of type GPUCommandEncoder, readonly

The GPUCommandEncoder that created this render pass encoder.

[[attachment_size]], readonly

Set to the following extents:

  • width, height = the dimensions of the pass’s render attachments

[[occlusion_query_set]], of type GPUQuerySet, readonly

The GPUQuerySet to store occlusion query results for the pass, which is initialized with GPURenderPassDescriptor.occlusionQuerySet at pass creation time.

[[endTimestampWrite]], of type GPU command?, readonly, defaulting to null

GPU command, if any, writing a timestamp when the pass ends.

[[maxDrawCount]] of type GPUSize64, readonly

The maximum number of draws allowed in this pass.

[[occlusion_query_active]], of type boolean

Whether the pass’s [[occlusion_query_set]] is being written.

When executing encoded render pass commands as part of a GPUCommandBuffer, an internal RenderState object is used to track the current state required for rendering.

RenderState has the following queue timeline properties:

[[occlusionQueryIndex]], of type GPUSize32

The index into [[occlusion_query_set]] at which to store the occlusion query results.

[[viewport]]

Current viewport rectangle and depth range. Initially set to the following values:

  • x, y = 0.0, 0.0

  • width, height = the dimensions of the pass’s render targets

  • minDepth, maxDepth = 0.0, 1.0

[[scissorRect]]

Current scissor rectangle. Initially set to the following values:

  • x, y = 0, 0

  • width, height = the dimensions of the pass’s render targets

[[blendConstant]], of type GPUColor

Current blend constant value, initially [0, 0, 0, 0].

[[stencilReference]], of type GPUStencilValue

Current stencil reference value, initially 0.

[[colorAttachments]], of type sequence<GPURenderPassColorAttachment?>

The color attachments and state for this render pass.

[[depthStencilAttachment]], of type GPURenderPassDepthStencilAttachment?

The depth/stencil attachment and state for this render pass.

Render passes also have framebuffer memory, which contains the texel data associated with each attachment that is written into by draw commands and read from for blending and depth/stencil testing.

Note: Depending on the GPU hardware, framebuffer memory may be the memory allocated by the attachment textures or may be a separate area of memory that the texture data is copied to and from, such as with tile-based architectures.

17.1.1. Render Pass Encoder Creation

dictionary GPURenderPassTimestampWrites {
    required GPUQuerySet querySet;
    GPUSize32 beginningOfPassWriteIndex;
    GPUSize32 endOfPassWriteIndex;
};
querySet, of type GPUQuerySet

The GPUQuerySet, of type "timestamp", that the query results will be written to.

beginningOfPassWriteIndex, of type GPUSize32

If defined, indicates the query index in querySet into which the timestamp at the beginning of the render pass will be written.

endOfPassWriteIndex, of type GPUSize32

If defined, indicates the query index in querySet into which the timestamp at the end of the render pass will be written.

Note: Timestamp query values are written in nanoseconds, but how the value is determined is implementation-defined and may not increase monotonically. See § 20.4 Timestamp Query for details.
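As a plain dictionary, a timestampWrites entry might look like the following sketch. The querySet here is a stand-in object; in a real application it would be a GPUQuerySet created with device.createQuerySet({ type: "timestamp", count: 2 }), and the indices are illustrative:

```javascript
// Hypothetical stand-in for a GPUQuerySet of type "timestamp".
const querySet = { type: "timestamp", count: 2 };

const timestampWrites = {
  querySet,
  beginningOfPassWriteIndex: 0, // timestamp at pass begin -> querySet slot 0
  endOfPassWriteIndex: 1,       // timestamp at pass end   -> querySet slot 1
};
```

Both index members are optional; omitting one simply skips that timestamp.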

dictionary GPURenderPassDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachment?> colorAttachments;
    GPURenderPassDepthStencilAttachment depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
    GPURenderPassTimestampWrites timestampWrites;
    GPUSize64 maxDrawCount = 50000000;
};
colorAttachments, of type sequence<GPURenderPassColorAttachment?>

The set of GPURenderPassColorAttachment values in this sequence defines which color attachments will be output to when executing this render pass.

Due to usage compatibility, no color attachment may alias another attachment or any resource used inside the render pass.

depthStencilAttachment, of type GPURenderPassDepthStencilAttachment

The GPURenderPassDepthStencilAttachment value that defines the depth/stencil attachment that will be output to and tested against when executing this render pass.

Due to usage compatibility, no writable depth/stencil attachment may alias another attachment or any resource used inside the render pass.

occlusionQuerySet, of type GPUQuerySet

The GPUQuerySet value defines where the occlusion query results will be stored for this pass.

timestampWrites, of type GPURenderPassTimestampWrites

Defines which timestamp values will be written for this pass, and where to write them to.

maxDrawCount, of type GPUSize64, defaulting to 50000000

The maximum number of draw calls that will be done in the render pass. Used by some implementations to size work injected before the render pass. Keeping the default is recommended unless it is known that more draw calls will be done.
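Taken together, a GPURenderPassDescriptor is an ordinary dictionary. A minimal sketch with one color attachment and a depth attachment (colorView and depthView are stand-ins for real GPUTextureView objects, which would come from texture.createView() on textures with RENDER_ATTACHMENT usage):

```javascript
// Hypothetical stand-ins for GPUTextureView objects.
const colorView = {};
const depthView = {};

const renderPassDescriptor = {
  colorAttachments: [{
    view: colorView,
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    loadOp: "clear",   // clear rather than load the previous contents
    storeOp: "store",  // keep the rendered result
  }],
  depthStencilAttachment: {
    view: depthView,
    depthClearValue: 1.0,
    depthLoadOp: "clear",
    depthStoreOp: "discard", // depth is not needed after the pass
  },
  // maxDrawCount is left at its default of 50000000.
};
```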

Valid Usage

Given a GPUDevice device and GPURenderPassDescriptor this, the following validation rules apply:

  1. this.colorAttachments.size must be ≤ device.[[limits]].maxColorAttachments.

  2. For each non-null colorAttachment in this.colorAttachments:

    1. colorAttachment.view must be valid to use with device.

    2. If colorAttachment.resolveTarget is provided:

      1. colorAttachment.resolveTarget must be valid to use with device.

    3. colorAttachment must meet the GPURenderPassColorAttachment Valid Usage rules.

  3. If this.depthStencilAttachment is provided:

    1. this.depthStencilAttachment.view must be valid to use with device.

    2. this.depthStencilAttachment must meet the GPURenderPassDepthStencilAttachment Valid Usage rules.

  4. There must exist at least one attachment, either:

  5. Validating GPURenderPassDescriptor’s color attachment bytes per sample(device, this.colorAttachments) succeeds.

  6. All views in non-null members of this.colorAttachments, and this.depthStencilAttachment.view if present, must have equal sampleCounts.

  7. For each view in non-null members of this.colorAttachments and this.depthStencilAttachment.view, if present, the [[renderExtent]] must match.

  8. If this.occlusionQuerySet is provided:

    1. this.occlusionQuerySet must be valid to use with device.

    2. this.occlusionQuerySet.type must be occlusion.

  9. If this.timestampWrites is provided:

Validating GPURenderPassDescriptor’s color attachment bytes per sample(device, colorAttachments)

Arguments:

Device timeline steps:

  1. Let formats be an empty list<GPUTextureFormat?>

  2. For each colorAttachment in colorAttachments:

    1. If colorAttachment is undefined, continue.

    2. Append colorAttachment.view.[[descriptor]].format to formats.

  3. Calculating color attachment bytes per sample(formats) must be ≤ device.[[limits]].maxColorAttachmentBytesPerSample.

17.1.1.1. Color Attachments
dictionary GPURenderPassColorAttachment {
    required (GPUTexture or GPUTextureView) view;
    GPUIntegerCoordinate depthSlice;
    (GPUTexture or GPUTextureView) resolveTarget;

    GPUColor clearValue;
    required GPULoadOp loadOp;
    required GPUStoreOp storeOp;
};
view, of type (GPUTexture or GPUTextureView)

Describes the texture subresource that will be output to for this color attachment. The subresource is determined by calling get as texture view(view).

depthSlice, of type GPUIntegerCoordinate

Indicates the depth slice index of the "3d" view that will be output to for this color attachment.

resolveTarget, of type (GPUTexture or GPUTextureView)

Describes the texture subresource that will receive the resolved output for this color attachment if view is multisampled. The subresource is determined by calling get as texture view(resolveTarget).

clearValue, of type GPUColor

Indicates the value to clear view to prior to executing the render pass. If not provided, defaults to {r: 0, g: 0, b: 0, a: 0}. Ignored if loadOp is not "clear".

The components of clearValue are all double values. They are converted to a texel value of texture format matching the render attachment. If conversion fails, a validation error is generated.

loadOp, of type GPULoadOp

Indicates the load operation to perform on view prior to executing the render pass.

Note: It is recommended to prefer clearing; see "clear" for details.

storeOp, of type GPUStoreOp

The store operation to perform on view after executing the render pass.

GPURenderPassColorAttachment Valid Usage

Given a GPURenderPassColorAttachment this:

  1. Let renderViewDescriptor be this.view.[[descriptor]].

  2. Let renderTexture be this.view.[[texture]].

  3. All of the requirements in the following steps must be met.

    1. renderViewDescriptor.format must be a color renderable format.

    2. this.view must be a renderable texture view.

    3. If renderViewDescriptor.dimension is "3d":

      1. this.depthSlice must be provided and must be < the depthOrArrayLayers of the logical miplevel-specific texture extent of the renderTexture subresource at mipmap level renderViewDescriptor.baseMipLevel.

      Otherwise:

      1. this.depthSlice must not be provided.

    4. If this.loadOp is "clear":

      1. Converting the IDL value this.clearValue to a texel value of texture format renderViewDescriptor.format must not throw a TypeError.

        Note: An error is not thrown if the value is out-of-range for the format but in-range for the corresponding WGSL primitive type (f32, i32, or u32).

    5. If this.resolveTarget is provided:

      1. Let resolveViewDescriptor be this.resolveTarget.[[descriptor]].

      2. Let resolveTexture be this.resolveTarget.[[texture]].

      3. renderTexture.sampleCount must be > 1.

      4. resolveTexture.sampleCount must be 1.

      5. this.resolveTarget must be a non-3d renderable texture view.

      6. this.resolveTarget.[[renderExtent]] and this.view.[[renderExtent]] must match.

      7. resolveViewDescriptor.format must equal renderViewDescriptor.format.

      8. resolveTexture.format must equal renderTexture.format.

      9. resolveViewDescriptor.format must support resolve according to § 26.1.1 Plain color formats.

A GPUTextureView view is a renderable texture view if all of the requirements in the following device timeline steps are met:
  1. Let descriptor be view.[[descriptor]].

  2. descriptor.usage must contain RENDER_ATTACHMENT.

  3. descriptor.dimension must be "2d" or "2d-array" or "3d".

  4. descriptor.mipLevelCount must be 1.

  5. descriptor.arrayLayerCount must be 1.

  6. descriptor.aspect must refer to all aspects of view.[[texture]].
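The six requirements above can be sketched as a predicate over a view descriptor. RENDER_ATTACHMENT is modeled here as a plain flag with the API's value, and coversAllAspects is an assumed precomputed boolean standing in for the aspect check against view.[[texture]]:

```javascript
// Stand-in for GPUTextureUsage.RENDER_ATTACHMENT (0x10 in the API).
const RENDER_ATTACHMENT = 0x10;

// Sketch of the "renderable texture view" requirements; descriptor is a
// plain object mirroring the view's GPUTextureViewDescriptor.
function isRenderableTextureView(descriptor, coversAllAspects) {
  return (descriptor.usage & RENDER_ATTACHMENT) !== 0 &&
    ["2d", "2d-array", "3d"].includes(descriptor.dimension) &&
    descriptor.mipLevelCount === 1 &&
    descriptor.arrayLayerCount === 1 &&
    coversAllAspects;
}
```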

Calculating color attachment bytes per sample(formats)

Arguments:

Returns: GPUSize32

  1. Let total be 0.

  2. For each non-null format in formats:

    1. Assert: format is a color renderable format.

    2. Let renderTargetPixelByteCost be the render target pixel byte cost of format.

    3. Let renderTargetComponentAlignment be the render target component alignment of format.

    4. Round total up to the smallest multiple of renderTargetComponentAlignment greater than or equal to total.

    5. Add renderTargetPixelByteCost to total.

  3. Return total.
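The rounding-and-accumulate loop above translates directly to code. The byte costs and alignments below are a small illustrative subset; the normative values are the render target pixel byte cost and render target component alignment columns of the format table in § 26.1.1:

```javascript
// Illustrative per-format costs; see § 26.1.1 for the authoritative table.
const formatInfo = {
  "r8unorm":     { byteCost: 1, alignment: 1 },
  "rgba8unorm":  { byteCost: 8, alignment: 1 },
  "rgba16float": { byteCost: 8, alignment: 2 },
  "r32float":    { byteCost: 4, alignment: 4 },
};

function colorAttachmentBytesPerSample(formats) {
  let total = 0;
  for (const format of formats) {
    if (format === null) continue;
    const { byteCost, alignment } = formatInfo[format];
    // Round total up to the smallest multiple of alignment >= total.
    total = Math.ceil(total / alignment) * alignment;
    total += byteCost;
  }
  return total;
}
```

The result is compared against device.[[limits]].maxColorAttachmentBytesPerSample during render pass validation.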

17.1.1.2. Depth/Stencil Attachments
dictionary GPURenderPassDepthStencilAttachment {
    required (GPUTexture or GPUTextureView) view;

    float depthClearValue;
    GPULoadOp depthLoadOp;
    GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;

    GPUStencilValue stencilClearValue = 0;
    GPULoadOp stencilLoadOp;
    GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};
view, of type (GPUTexture or GPUTextureView)

Describes the texture subresource that will be output to and read from for this depth/stencil attachment. The subresource is determined by calling get as texture view(view).

depthClearValue, of type float

Indicates the value to clear view’s depth component to prior to executing the render pass. Ignored if depthLoadOp is not "clear". Must be between 0.0 and 1.0, inclusive.

depthLoadOp, of type GPULoadOp

Indicates the load operation to perform on view’s depth component prior to executing the render pass.

Note: It is recommended to prefer clearing; see "clear" for details.

depthStoreOp, of type GPUStoreOp

The store operation to perform on view’s depth component after executing the render pass.

depthReadOnly, of type boolean, defaulting to false

Indicates that the depth component of view is read only.

stencilClearValue, of type GPUStencilValue, defaulting to 0

Indicates the value to clear view’s stencil component to prior to executing the render pass. Ignored if stencilLoadOp is not "clear".

The value will be converted to the type of the stencil aspect of view by taking the same number of LSBs as the number of bits in the stencil aspect of one texel of view.
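For an 8-bit stencil aspect (e.g. the stencil component of "depth24plus-stencil8"), the LSB conversion described above amounts to a mask. A sketch, where stencilBitCount is the bit width of the stencil aspect:

```javascript
// Keep as many least-significant bits of the clear value as the stencil
// aspect has per texel (8 for an 8-bit stencil aspect).
function convertStencilValue(value, stencilBitCount) {
  const mask = (1 << stencilBitCount) - 1;
  return value & mask;
}
```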

stencilLoadOp, of type GPULoadOp

Indicates the load operation to perform on view’s stencil component prior to executing the render pass.

Note: It is recommended to prefer clearing; see "clear" for details.

stencilStoreOp, of type GPUStoreOp

The store operation to perform on view’s stencil component after executing the render pass.

stencilReadOnly, of type boolean, defaulting to false

Indicates that the stencil component of view is read only.

GPURenderPassDepthStencilAttachment Valid Usage

Given a GPURenderPassDepthStencilAttachment this, the following validation rules apply:

17.1.1.3. Load & Store Operations
enum GPULoadOp {
    "load",
    "clear",
};
"load"

Loads the existing value for this attachment into the render pass.

"clear"

Loads a clear value for this attachment into the render pass.

Note: On some GPU hardware (primarily mobile), "clear" is significantly cheaper because it avoids loading data from main memory into tile-local memory. On other GPU hardware, there isn’t a significant difference. As a result, it is recommended to use "clear" rather than "load" in cases where the initial value doesn’t matter (e.g. the render target will be cleared using a skybox).

enum GPUStoreOp {
    "store",
    "discard",
};
"store"

Stores the resulting value of the render pass for this attachment.

"discard"

Discards the resulting value of the render pass for this attachment.

Note: Discarded attachments behave as if they are cleared to zero, but implementations are not required to perform a clear at the end of the render pass. Implementations which do not explicitly clear discarded attachments at the end of a pass must lazily clear them prior to reading the attachment contents, which occurs via sampling, copies, attaching to a later render pass with "load", displaying or reading back the canvas (get a copy of the image contents of a context), etc.

17.1.1.4. Render Pass Layout

GPURenderPassLayout declares the layout of the render targets of a GPURenderBundle. It is also used internally to describe GPURenderPassEncoder layouts and GPURenderPipeline layouts. It determines compatibility between render passes, render bundles, and render pipelines.

dictionary GPURenderPassLayout
         : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat?> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};
colorFormats, of type sequence<GPUTextureFormat?>

A list of the GPUTextureFormats of the color attachments for this pass or bundle.

depthStencilFormat, of type GPUTextureFormat

The GPUTextureFormat of the depth/stencil attachment for this pass or bundle.

sampleCount, of type GPUSize32, defaulting to 1

Number of samples per pixel in the attachments for this pass or bundle.

Two GPURenderPassLayout values are equal if:

  • their colorFormats sequences contain the same formats in the same order,

  • their depthStencilFormat values are equal, and

  • their sampleCount values are equal.
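That equality check compares the three members of the dictionary; a sketch over layouts modeled as plain objects:

```javascript
// Two GPURenderPassLayout values compare equal when their colorFormats
// sequences, depthStencilFormat, and sampleCount all match.
function renderPassLayoutsEqual(a, b) {
  return a.colorFormats.length === b.colorFormats.length &&
    a.colorFormats.every((f, i) => f === b.colorFormats[i]) &&
    a.depthStencilFormat === b.depthStencilFormat &&
    a.sampleCount === b.sampleCount;
}
```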
derive render targets layout from pass

Arguments:

Returns: GPURenderPassLayout

Device timeline steps:

  1. Let layout be a new GPURenderPassLayout object.

  2. For each colorAttachment in descriptor.colorAttachments:

    1. If colorAttachment is not null:

      1. Set layout.sampleCount to colorAttachment.view.[[texture]].sampleCount.

      2. Append colorAttachment.view.[[descriptor]].format to layout.colorFormats.

    2. Otherwise:

      1. Append null to layout.colorFormats.

  3. Let depthStencilAttachment be descriptor.depthStencilAttachment.

  4. If depthStencilAttachment is not null:

    1. Let view be depthStencilAttachment.view.

    2. Set layout.sampleCount to view.[[texture]].sampleCount.

    3. Set layout.depthStencilFormat to view.[[descriptor]].format.

  5. Return layout.
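The derivation above translates almost mechanically to code. Attachments here are plain objects carrying the texture's sampleCount and the view's format directly; real spec steps read these from view.[[texture]] and view.[[descriptor]]:

```javascript
// Sketch of "derive render targets layout from pass"; attachment objects
// are assumed to expose { sampleCount, format } for their view.
function deriveRenderTargetsLayoutFromPass(descriptor) {
  const layout = { colorFormats: [], depthStencilFormat: undefined, sampleCount: 1 };
  for (const colorAttachment of descriptor.colorAttachments) {
    if (colorAttachment !== null) {
      layout.sampleCount = colorAttachment.sampleCount;
      layout.colorFormats.push(colorAttachment.format);
    } else {
      layout.colorFormats.push(null); // sparse attachment slot
    }
  }
  const ds = descriptor.depthStencilAttachment;
  if (ds != null) {
    layout.sampleCount = ds.sampleCount;
    layout.depthStencilFormat = ds.format;
  }
  return layout;
}
```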

derive render targets layout from pipeline

Arguments:

Returns: GPURenderPassLayout

Device timeline steps:

  1. Let layout be a new GPURenderPassLayout object.

  2. Set layout.sampleCount to descriptor.multisample.count.

  3. If descriptor.depthStencil is provided:

    1. Set layout.depthStencilFormat to descriptor.depthStencil.format.

  4. If descriptor.fragment is provided:

    1. For each colorTarget in descriptor.fragment.targets:

      1. Append colorTarget.format to layout.colorFormats if colorTarget is not null, or append null otherwise.

  5. Return layout.

17.1.2. Finalization

The render pass encoder can be ended by calling end() once the user has finished recording commands for the pass. Once end() has been called, the render pass encoder can no longer be used.

end()

Completes recording of the render pass commands sequence.

Called on: GPURenderPassEncoder this.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Let parentEncoder be this.[[command_encoder]].

  2. If any of the following requirements are unmet, generate a validation error and return.

  3. Set this.[[state]] to "ended".

  4. Set parentEncoder.[[state]] to "open".

  5. If any of the following requirements are unmet, invalidate parentEncoder and return.

  6. Extend parentEncoder.[[commands]] with this.[[commands]].

  7. If this.[[endTimestampWrite]] is not null:

    1. Extend parentEncoder.[[commands]] with this.[[endTimestampWrite]].

  8. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. For each non-null colorAttachment in renderState.[[colorAttachments]]:

    1. Let colorView be colorAttachment.view.

    2. If colorView.[[descriptor]].dimension is:

      "3d"

Let colorSubregion be the depth slice at index colorAttachment.depthSlice of colorView.

      Otherwise

      Let colorSubregion be colorView.

    3. If colorAttachment.resolveTarget is not null:

      1. Resolve the multiple samples of every texel of colorSubregion to a single sample and copy to colorAttachment.resolveTarget.

    4. If colorAttachment.storeOp is:

      "store"

      Ensure the contents of the framebuffer memory associated with colorSubregion are stored in colorSubregion.

      "discard"

      Set every texel of colorSubregion to zero.

  2. Let depthStencilAttachment be renderState.[[depthStencilAttachment]].

  3. If depthStencilAttachment is not null:

    1. If depthStencilAttachment.depthStoreOp is:

      Not provided

      Assert that depthStencilAttachment.depthReadOnly is true and leave the depth subresource of depthStencilView unchanged.

      "store"

      Ensure the contents of the framebuffer memory associated with the depth subresource of depthStencilView are stored in depthStencilView.

      "discard"

      Set every texel in the depth subresource of depthStencilView to zero.

    2. If depthStencilAttachment.stencilStoreOp is:

      Not provided

      Assert that depthStencilAttachment.stencilReadOnly is true and leave the stencil subresource of depthStencilView unchanged.

      "store"

      Ensure the contents of the framebuffer memory associated with the stencil subresource of depthStencilView are stored in depthStencilView.

      "discard"

      Set every texel in the stencil subresource of depthStencilView to zero.

  4. Set renderState to null.

Note: Discarded attachments behave as if they are cleared to zero, but implementations are not required to perform a clear at the end of the render pass. See the note on "discard" for additional details.

Note: Read-only depth-stencil attachments can be thought of as implicitly using the "store" operation, but since their content is unchanged during the render pass implementations don’t need to update the attachment. Validation that requires the store op to not be provided for read-only attachments is done in GPURenderPassDepthStencilAttachment Valid Usage.

17.2. GPURenderCommandsMixin

GPURenderCommandsMixin defines rendering commands common to GPURenderPassEncoder and GPURenderBundleEncoder.

interface mixin GPURenderCommandsMixin {
    undefined setPipeline(GPURenderPipeline pipeline);

    undefined setIndexBuffer(GPUBuffer buffer, GPUIndexFormat indexFormat, optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined setVertexBuffer(GPUIndex32 slot, GPUBuffer? buffer, optional GPUSize64 offset = 0, optional GPUSize64 size);

    undefined draw(GPUSize32 vertexCount, optional GPUSize32 instanceCount = 1,
        optional GPUSize32 firstVertex = 0, optional GPUSize32 firstInstance = 0);
    undefined drawIndexed(GPUSize32 indexCount, optional GPUSize32 instanceCount = 1,
        optional GPUSize32 firstIndex = 0,
        optional GPUSignedOffset32 baseVertex = 0,
        optional GPUSize32 firstInstance = 0);

    undefined drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    undefined drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

GPURenderCommandsMixin assumes the presence of GPUObjectBase, GPUCommandsMixin, and GPUBindingCommandsMixin members on the same object. It must only be included by interfaces which also include those mixins.

GPURenderCommandsMixin has the following device timeline properties:

[[layout]], of type GPURenderPassLayout, readonly

The layout of the render pass.

[[depthReadOnly]], of type boolean, readonly

If true, indicates that the depth component is not modified.

[[stencilReadOnly]], of type boolean, readonly

If true, indicates that the stencil component is not modified.

[[usage scope]], of type usage scope, initially empty

The usage scope for this render pass or bundle.

[[pipeline]], of type GPURenderPipeline, initially null

The current GPURenderPipeline.

[[index_buffer]], of type GPUBuffer, initially null

The current buffer to read index data from.

[[index_format]], of type GPUIndexFormat

The format of the index data in [[index_buffer]].

[[index_buffer_offset]], of type GPUSize64

The offset in bytes of the section of [[index_buffer]] currently set.

[[index_buffer_size]], of type GPUSize64

The size in bytes of the section of [[index_buffer]] currently set, initially 0.

[[vertex_buffers]], of type ordered map<slot, GPUBuffer>, initially empty

The current GPUBuffers to read vertex data from for each slot.

[[vertex_buffer_sizes]], of type ordered map<slot, GPUSize64>, initially empty

The size in bytes of the section of GPUBuffer currently set for each slot.

[[drawCount]], of type GPUSize64

The number of draw commands recorded in this encoder.

To Enqueue a render command on GPURenderCommandsMixin encoder which issues the steps of a GPU Command command with RenderState renderState, run the following device timeline steps:
  1. Append command to encoder.[[commands]].

  2. When command is executed as part of a GPUCommandBuffer commandBuffer:

    1. Issue the steps of command with commandBuffer.[[renderState]] as renderState.

17.2.1. Drawing

setPipeline(pipeline)

Sets the current GPURenderPipeline.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.setPipeline(pipeline) method.
Parameter Type Nullable Optional Description
pipeline GPURenderPipeline The render pipeline to use for subsequent drawing commands.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let pipelineTargetsLayout be derive render targets layout from pipeline(pipeline.[[descriptor]]).

  3. If any of the following conditions are unsatisfied, invalidate this and return.

  4. Set this.[[pipeline]] to be pipeline.

setIndexBuffer(buffer, indexFormat, offset, size)

Sets the current index buffer.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.setIndexBuffer(buffer, indexFormat, offset, size) method.
Parameter Type Nullable Optional Description
buffer GPUBuffer Buffer containing index data to use for subsequent drawing commands.
indexFormat GPUIndexFormat Format of the index data contained in buffer.
offset GPUSize64 Offset in bytes into buffer where the index data begins. Defaults to 0.
size GPUSize64 Size in bytes of the index data in buffer. Defaults to the size of the buffer minus the offset.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If size is missing, set size to max(0, buffer.size - offset).

  3. If any of the following conditions are unsatisfied, invalidate this and return.

  4. Add buffer to [[usage scope]] with usage input.

  5. Set this.[[index_buffer]] to be buffer.

  6. Set this.[[index_format]] to be indexFormat.

  7. Set this.[[index_buffer_offset]] to be offset.

  8. Set this.[[index_buffer_size]] to be size.

setVertexBuffer(slot, buffer, offset, size)

Sets the current vertex buffer for the given slot.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.setVertexBuffer(slot, buffer, offset, size) method.
Parameter Type Nullable Optional Description
slot GPUIndex32 The vertex buffer slot to set the vertex buffer for.
buffer GPUBuffer? Buffer containing vertex data to use for subsequent drawing commands.
offset GPUSize64 Offset in bytes into buffer where the vertex data begins. Defaults to 0.
size GPUSize64 Size in bytes of the vertex data in buffer. Defaults to the size of the buffer minus the offset.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let bufferSize be 0 if buffer is null, or buffer.size if not.

  3. If size is missing, set size to max(0, bufferSize - offset).

  4. If any of the following requirements are unmet, invalidate this and return.

  5. If buffer is null:

    1. Remove this.[[vertex_buffers]][slot].

    2. Remove this.[[vertex_buffer_sizes]][slot].

    Otherwise:

    1. If any of the following requirements are unmet, invalidate this and return.

    2. Add buffer to [[usage scope]] with usage input.

    3. Set this.[[vertex_buffers]][slot] to be buffer.

    4. Set this.[[vertex_buffer_sizes]][slot] to be size.

draw(vertexCount, instanceCount, firstVertex, firstInstance)

Draws primitives. See § 23.2 Rendering for the detailed specification.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.draw(vertexCount, instanceCount, firstVertex, firstInstance) method.
Parameter Type Nullable Optional Description
vertexCount GPUSize32 The number of vertices to draw.
instanceCount GPUSize32 The number of instances to draw.
firstVertex GPUSize32 Offset into the vertex buffers, in vertices, to begin drawing from.
firstInstance GPUSize32 First instance to draw.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. All of the requirements in the following steps must be met. If any are unmet, invalidate this and return.

    1. It must be valid to draw with this.

    2. Let buffers be this.[[pipeline]].[[descriptor]].vertex.buffers.

    3. For each GPUIndex32 slot from 0 to buffers.size (non-inclusive):

      1. If buffers[slot] is null, continue.

      2. Let bufferSize be this.[[vertex_buffer_sizes]][slot].

      3. Let stride be buffers[slot].arrayStride.

      4. Let attributes be buffers[slot].attributes

      5. Let lastStride be the maximum value of (attribute.offset + byteSize(attribute.format)) over each attribute in attributes, or 0 if attributes is empty.

      6. Let strideCount be computed based on buffers[slot].stepMode:

        "vertex"

        firstVertex + vertexCount

        "instance"

        firstInstance + instanceCount

      7. If strideCount ≠ 0:

        1. (strideCount − 1) × stride + lastStride must be ≤ bufferSize.

  3. Increment this.[[drawCount]] by 1.

  4. Let bindingState be a snapshot of this’s current state.

  5. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of vertexCount vertices, starting with vertex firstVertex, with the states from bindingState and renderState.
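The per-slot bounds check in step 2.3 above can be sketched as a standalone function. Vertex-buffer layouts and bound sizes are modeled as plain data, and byteSize(attribute.format) is assumed to be precomputed into each attribute:

```javascript
// Sketch of the vertex-buffer bounds validation for draw(). Each layout is
// { arrayStride, stepMode, attributes: [{ offset, byteSize }] } (or null),
// and bufferSizes[slot] is the size of the vertex data bound to that slot.
function validateDrawVertexBounds(layouts, bufferSizes,
    { vertexCount, instanceCount, firstVertex, firstInstance }) {
  for (let slot = 0; slot < layouts.length; slot++) {
    const layout = layouts[slot];
    if (layout === null) continue;
    const stride = layout.arrayStride;
    // Largest end-offset of any attribute within one stride.
    const lastStride = layout.attributes.reduce(
      (max, a) => Math.max(max, a.offset + a.byteSize), 0);
    const strideCount = layout.stepMode === "vertex"
      ? firstVertex + vertexCount
      : firstInstance + instanceCount;
    if (strideCount !== 0 &&
        (strideCount - 1) * stride + lastStride > bufferSizes[slot]) {
      return false; // would read past the end of the bound vertex data
    }
  }
  return true;
}
```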

drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance)

Draws indexed primitives. See § 23.2 Rendering for the detailed specification.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance) method.
Parameter Type Nullable Optional Description
indexCount GPUSize32 The number of indices to draw.
instanceCount GPUSize32 The number of instances to draw.
firstIndex GPUSize32 Offset into the index buffer, in indices, to begin drawing from.
baseVertex GPUSignedOffset32 Added to each index value before indexing into the vertex buffers.
firstInstance GPUSize32 First instance to draw.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Increment this.[[drawCount]] by 1.

  4. Let bindingState be a snapshot of this’s current state.

  5. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of indexCount indexed vertices, starting with index firstIndex from vertex baseVertex, with the states from bindingState and renderState.

Note: WebGPU applications should never use index data with indices out of bounds of any bound vertex buffer that has GPUVertexStepMode "vertex". WebGPU implementations have different ways of handling this, and therefore a range of behaviors is allowed. Either the whole draw call is discarded, or the access to those attributes out of bounds is described by WGSL’s invalid memory reference.
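As a non-normative sketch, an indexed draw of a two-triangle quad might be encoded as follows. The names passEncoder and indexBuffer are assumptions: an active GPURenderPassEncoder and a buffer created with GPUBufferUsage.INDEX.

```javascript
// Index data for two triangles sharing an edge (a quad).
const indexData = new Uint16Array([0, 1, 2, 2, 1, 3]);

// The calls below assume an active render pass (hypothetical names):
// passEncoder.setIndexBuffer(indexBuffer, 'uint16');
// Draw indexData.length indices, 1 instance, starting at index 0,
// with baseVertex 0 and firstInstance 0.
// passEncoder.drawIndexed(indexData.length, 1, 0, 0, 0);
```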

drawIndirect(indirectBuffer, indirectOffset)

Draws primitives using parameters read from a GPUBuffer. See § 23.2 Rendering for the detailed specification.

The indirect draw parameters encoded in the buffer must be a tightly packed block of four 32-bit unsigned integer values (16 bytes total), given in the same order as the arguments for draw(). For example:

let drawIndirectParameters = new Uint32Array(4);
drawIndirectParameters[0] = vertexCount;
drawIndirectParameters[1] = instanceCount;
drawIndirectParameters[2] = firstVertex;
drawIndirectParameters[3] = firstInstance;

The value corresponding to firstInstance must be 0, unless the "indirect-first-instance" feature is enabled. If the "indirect-first-instance" feature is not enabled and firstInstance is not zero, the drawIndirect() call will be treated as a no-op.
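A parameter block like the one above can be uploaded and consumed as follows; this is a non-normative sketch, and device, passEncoder, and indirectBuffer are assumed names. The buffer must include GPUBufferUsage.INDIRECT in its usage.

```javascript
// Pack indirect draw parameters for one draw call:
// 3 vertices, 1 instance, starting at vertex 0 and instance 0.
const params = new Uint32Array([3, 1, 0, 0]);

// Hypothetical upload and draw (requires a GPUDevice and an active pass):
// const indirectBuffer = device.createBuffer({
//     size: params.byteLength, // 16 bytes
//     usage: GPUBufferUsage.INDIRECT | GPUBufferUsage.COPY_DST,
// });
// device.queue.writeBuffer(indirectBuffer, 0, params);
// passEncoder.drawIndirect(indirectBuffer, 0);
```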

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.drawIndirect(indirectBuffer, indirectOffset) method.
Parameter Type Nullable Optional Description
indirectBuffer GPUBuffer Buffer containing the indirect draw parameters.
indirectOffset GPUSize64 Offset in bytes into indirectBuffer where the drawing data begins.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Add indirectBuffer to [[usage scope]] with usage input.

  4. Increment this.[[drawCount]] by 1.

  5. Let bindingState be a snapshot of this’s current state.

  6. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Let vertexCount be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.

  2. Let instanceCount be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 4) bytes.

  3. Let firstVertex be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 8) bytes.

  4. Let firstInstance be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 12) bytes.

  5. Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of vertexCount vertices, starting with vertex firstVertex, with the states from bindingState and renderState.

drawIndexedIndirect(indirectBuffer, indirectOffset)

Draws indexed primitives using parameters read from a GPUBuffer. See § 23.2 Rendering for the detailed specification.

The indirect drawIndexed parameters encoded in the buffer must be a tightly packed block of five 32-bit values (20 bytes total), given in the same order as the arguments for drawIndexed(). The value corresponding to baseVertex is a signed 32-bit integer, and all others are unsigned 32-bit integers. For example:

let drawIndexedIndirectParameters = new Uint32Array(5);
let drawIndexedIndirectParametersSigned = new Int32Array(drawIndexedIndirectParameters.buffer);
drawIndexedIndirectParameters[0] = indexCount;
drawIndexedIndirectParameters[1] = instanceCount;
drawIndexedIndirectParameters[2] = firstIndex;
// baseVertex is a signed value.
drawIndexedIndirectParametersSigned[3] = baseVertex;
drawIndexedIndirectParameters[4] = firstInstance;

The value corresponding to firstInstance must be 0, unless the "indirect-first-instance" feature is enabled. If the "indirect-first-instance" feature is not enabled and firstInstance is not zero, the drawIndexedIndirect() call will be treated as a no-op.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.drawIndexedIndirect(indirectBuffer, indirectOffset) method.
Parameter Type Nullable Optional Description
indirectBuffer GPUBuffer Buffer containing the indirect drawIndexed parameters.
indirectOffset GPUSize64 Offset in bytes into indirectBuffer where the drawing data begins.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Add indirectBuffer to [[usage scope]] with usage input.

  4. Increment this.[[drawCount]] by 1.

  5. Let bindingState be a snapshot of this’s current state.

  6. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Let indexCount be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.

  2. Let instanceCount be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 4) bytes.

  3. Let firstIndex be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 8) bytes.

  4. Let baseVertex be a signed 32-bit integer read from indirectBuffer at (indirectOffset + 12) bytes.

  5. Let firstInstance be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 16) bytes.

  6. Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of indexCount indexed vertices, starting with index firstIndex from vertex baseVertex, with the states from bindingState and renderState.

To determine if it’s valid to draw with GPURenderCommandsMixin encoder, run the following device timeline steps:
  1. If any of the following conditions are unsatisfied, return false:

  2. Otherwise return true.

To determine if it’s valid to draw indexed with GPURenderCommandsMixin encoder, run the following device timeline steps:
  1. If any of the following conditions are unsatisfied, return false:

  2. Otherwise return true.

17.2.2. Rasterization state

The GPURenderPassEncoder has several methods which affect how draw commands are rasterized to attachments used by this encoder.

setViewport(x, y, width, height, minDepth, maxDepth)

Sets the viewport used during the rasterization stage to linearly map from normalized device coordinates to viewport coordinates.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setViewport(x, y, width, height, minDepth, maxDepth) method.
Parameter Type Nullable Optional Description
x float Minimum X value of the viewport in pixels.
y float Minimum Y value of the viewport in pixels.
width float Width of the viewport in pixels.
height float Height of the viewport in pixels.
minDepth float Minimum depth value of the viewport.
maxDepth float Maximum depth value of the viewport.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let maxViewportRange be this.limits.maxTextureDimension2D × 2.

  3. If any of the following conditions are unsatisfied, invalidate this and return.

    • x ≥ -maxViewportRange

    • y ≥ -maxViewportRange

    • 0 ≤ width ≤ this.limits.maxTextureDimension2D

    • 0 ≤ height ≤ this.limits.maxTextureDimension2D

    • x + width ≤ maxViewportRange − 1

    • y + height ≤ maxViewportRange − 1

    • 0.0 ≤ minDepth ≤ 1.0

    • 0.0 ≤ maxDepth ≤ 1.0

    • minDepth ≤ maxDepth

  4. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Round x, y, width, and height to some uniform precision, no less precise than integer rounding.

  2. Set renderState.[[viewport]] to the extents x, y, width, height, minDepth, and maxDepth.
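As a non-normative sketch, restricting rasterization to the left half of a 640×480 attachment with the full depth range might look like this; passEncoder is an assumed name for an active GPURenderPassEncoder.

```javascript
// Viewport covering the left half of a 640×480 attachment,
// with the full [0, 1] depth range.
const [width, height] = [640, 480];
const viewport = { x: 0, y: 0, width: width / 2, height, minDepth: 0.0, maxDepth: 1.0 };

// Hypothetical call on an active render pass encoder:
// passEncoder.setViewport(viewport.x, viewport.y, viewport.width,
//     viewport.height, viewport.minDepth, viewport.maxDepth);
```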

setScissorRect(x, y, width, height)

Sets the scissor rectangle used during the rasterization stage. After transformation into viewport coordinates any fragments which fall outside the scissor rectangle will be discarded.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setScissorRect(x, y, width, height) method.
Parameter Type Nullable Optional Description
x GPUIntegerCoordinate Minimum X value of the scissor rectangle in pixels.
y GPUIntegerCoordinate Minimum Y value of the scissor rectangle in pixels.
width GPUIntegerCoordinate Width of the scissor rectangle in pixels.
height GPUIntegerCoordinate Height of the scissor rectangle in pixels.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Set renderState.[[scissorRect]] to the extents x, y, width, and height.
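As a non-normative sketch, discarding all fragments outside a centered 100×100 region of a 640×480 attachment might look like this; passEncoder is an assumed name.

```javascript
// A centered 100×100 scissor rectangle in a 640×480 attachment.
const scissor = { x: (640 - 100) / 2, y: (480 - 100) / 2, width: 100, height: 100 };

// Hypothetical call on an active render pass encoder:
// passEncoder.setScissorRect(scissor.x, scissor.y, scissor.width, scissor.height);
```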

setBlendConstant(color)

Sets the constant blend color and alpha values used with "constant" and "one-minus-constant" GPUBlendFactors.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setBlendConstant(color) method.
Parameter Type Nullable Optional Description
color GPUColor The color to use when blending.

Returns: undefined

Content timeline steps:

  1. ? validate GPUColor shape(color).

  2. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Set renderState.[[blendConstant]] to color.
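As a non-normative sketch, a GPUColor may be given either as a dictionary or as a sequence of four numbers; passEncoder is an assumed name.

```javascript
// A half-intensity blend constant for use with the "constant" and
// "one-minus-constant" blend factors.
const blendColor = { r: 0.5, g: 0.5, b: 0.5, a: 1.0 };

// Hypothetical calls on an active render pass encoder:
// passEncoder.setBlendConstant(blendColor);              // dictionary form
// passEncoder.setBlendConstant([0.5, 0.5, 0.5, 1.0]);   // sequence form
```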

setStencilReference(reference)

Sets the [[stencilReference]] value used during stencil tests with the "replace" GPUStencilOperation.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setStencilReference(reference) method.
Parameter Type Nullable Optional Description
reference GPUStencilValue The new stencil reference value.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Set renderState.[[stencilReference]] to reference.

17.2.3. Queries

beginOcclusionQuery(queryIndex)
Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.beginOcclusionQuery(queryIndex) method.
Parameter Type Nullable Optional Description
queryIndex GPUSize32 The index of the query in the query set.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Set this.[[occlusion_query_active]] to true.

  4. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Set renderState.[[occlusionQueryIndex]] to queryIndex.

endOcclusionQuery()
Called on: GPURenderPassEncoder this.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Set this.[[occlusion_query_active]] to false.

  4. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Let passingFragments be non-zero if any fragment samples passed all per-fragment tests since the corresponding beginOcclusionQuery() command was executed, and zero otherwise.

    Note: If no draw calls occurred, passingFragments is zero.

  2. Write passingFragments into this.[[occlusion_query_set]] at index renderState.[[occlusionQueryIndex]].

17.2.4. Bundles

executeBundles(bundles)

Executes the commands previously recorded into the given GPURenderBundles as part of this render pass.

When a GPURenderBundle is executed, it does not inherit the render pass’s pipeline, bind groups, or vertex and index buffers. After a GPURenderBundle has executed, the render pass’s pipeline, bind group, and vertex/index buffer state is cleared (to the initial, empty values).

Note: The state is cleared, not restored to the previous state. This occurs even if zero GPURenderBundles are executed.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.executeBundles(bundles) method.
Parameter Type Nullable Optional Description
bundles sequence<GPURenderBundle> List of render bundles to execute.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. For each bundle in bundles:

    1. Increment this.[[drawCount]] by bundle.[[drawCount]].

    2. Merge bundle.[[usage scope]] into this.[[usage scope]].

    3. Enqueue a render command on this which issues the following steps on the Queue timeline with renderState when executed:

      Queue timeline steps:
      1. Execute each command in bundle.[[command_list]] with renderState.

        Note: renderState cannot be changed by executing render bundles. Binding state was already captured at bundle encoding time, and so isn’t used when executing bundles.

  4. Reset the render pass binding state of this.

To Reset the render pass binding state of GPURenderPassEncoder encoder run the following device timeline steps:
  1. Clear encoder.[[bind_groups]].

  2. Set encoder.[[pipeline]] to null.

  3. Set encoder.[[index_buffer]] to null.

  4. Clear encoder.[[vertex_buffers]].

18. Bundles

A bundle is a partial, limited pass that is encoded once and can then be executed multiple times as part of future pass encoders without expiring after use like typical command buffers. This can reduce the overhead of encoding and submission of commands which are issued repeatedly without changing.
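As a non-normative sketch, a bundle is recorded once and replayed each frame. The names device, pipeline, and passEncoder are assumptions; the formats declared in the encoder descriptor must match the render pass the bundle is later executed in.

```javascript
// Layout the bundle will be compatible with (must match the render pass).
const bundleDescriptor = {
    colorFormats: ['bgra8unorm'],
    depthStencilFormat: 'depth24plus',
};

// Hypothetical one-time recording:
// const bundleEncoder = device.createRenderBundleEncoder(bundleDescriptor);
// bundleEncoder.setPipeline(pipeline);
// bundleEncoder.draw(3);
// const bundle = bundleEncoder.finish();

// Later, inside each frame's render pass:
// passEncoder.executeBundles([bundle]);
```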

18.1. GPURenderBundle

[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;
[[command_list]], of type list<GPU command>

A list of GPU commands to be submitted to the GPURenderPassEncoder when the GPURenderBundle is executed.

[[usage scope]], of type usage scope, initially empty

The usage scope for this render bundle, stored for later merging into the GPURenderPassEncoder’s [[usage scope]] in executeBundles().

[[layout]], of type GPURenderPassLayout

The layout of the render bundle.

[[depthReadOnly]], of type boolean

If true, indicates that the depth component is not modified by executing this render bundle.

[[stencilReadOnly]], of type boolean

If true, indicates that the stencil component is not modified by executing this render bundle.

[[drawCount]], of type GPUSize64

The number of draw commands in this GPURenderBundle.

18.1.1. Render Bundle Creation

dictionary GPURenderBundleDescriptor
         : GPUObjectDescriptorBase {
};
[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUCommandsMixin;
GPURenderBundleEncoder includes GPUDebugCommandsMixin;
GPURenderBundleEncoder includes GPUBindingCommandsMixin;
GPURenderBundleEncoder includes GPURenderCommandsMixin;
createRenderBundleEncoder(descriptor)

Creates a GPURenderBundleEncoder.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createRenderBundleEncoder(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderBundleEncoderDescriptor Description of the GPURenderBundleEncoder to create.

Returns: GPURenderBundleEncoder

Content timeline steps:

  1. ? Validate texture format required features of each non-null element of descriptor.colorFormats with this.[[device]].

  2. If descriptor.depthStencilFormat is provided:

    1. ? Validate texture format required features of descriptor.depthStencilFormat with this.[[device]].

  3. Let e be ! create a new WebGPU object(this, GPURenderBundleEncoder, descriptor).

  4. Issue the initialization steps on the Device timeline of this.

  5. Return e.

Device timeline initialization steps:
  1. If any of the following conditions are unsatisfied generate a validation error, invalidate e and return.

  2. Set e.[[layout]] to a copy of descriptor’s included GPURenderPassLayout interface.

  3. Set e.[[depthReadOnly]] to descriptor.depthReadOnly.

  4. Set e.[[stencilReadOnly]] to descriptor.stencilReadOnly.

  5. Set e.[[state]] to "open".

  6. Set e.[[drawCount]] to 0.

18.1.2. Encoding

dictionary GPURenderBundleEncoderDescriptor
         : GPURenderPassLayout {
    boolean depthReadOnly = false;
    boolean stencilReadOnly = false;
};
depthReadOnly, of type boolean, defaulting to false

If true, indicates that the render bundle does not modify the depth component of the GPURenderPassDepthStencilAttachment of any render pass the render bundle is executed in.

See read-only depth-stencil.

stencilReadOnly, of type boolean, defaulting to false

If true, indicates that the render bundle does not modify the stencil component of the GPURenderPassDepthStencilAttachment of any render pass the render bundle is executed in.

See read-only depth-stencil.

18.1.3. Finalization

finish(descriptor)

Completes recording of the render bundle commands sequence.

Called on: GPURenderBundleEncoder this.

Arguments:

Arguments for the GPURenderBundleEncoder.finish(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderBundleDescriptor

Returns: GPURenderBundle

Content timeline steps:

  1. Let renderBundle be a new GPURenderBundle.

  2. Issue the finish steps on the Device timeline of this.[[device]].

  3. Return renderBundle.

Device timeline finish steps:
  1. Let validationSucceeded be true if all of the following requirements are met, and false otherwise.

  2. Set this.[[state]] to "ended".

  3. If validationSucceeded is false, then:

    1. Generate a validation error.

    2. Return an invalidated GPURenderBundle.

  4. Set renderBundle.[[command_list]] to this.[[commands]].

  5. Set renderBundle.[[usage scope]] to this.[[usage scope]].

  6. Set renderBundle.[[drawCount]] to this.[[drawCount]].

19. Queues

19.1. GPUQueueDescriptor

GPUQueueDescriptor describes a queue request.

dictionary GPUQueueDescriptor
         : GPUObjectDescriptorBase {
};

19.2. GPUQueue

[Exposed=(Window, Worker), SecureContext]
interface GPUQueue {
    undefined submit(sequence<GPUCommandBuffer> commandBuffers);

    Promise<undefined> onSubmittedWorkDone();

    undefined writeBuffer(
        GPUBuffer buffer,
        GPUSize64 bufferOffset,
        AllowSharedBufferSource data,
        optional GPUSize64 dataOffset = 0,
        optional GPUSize64 size);

    undefined writeTexture(
        GPUTexelCopyTextureInfo destination,
        AllowSharedBufferSource data,
        GPUTexelCopyBufferLayout dataLayout,
        GPUExtent3D size);

    undefined copyExternalImageToTexture(
        GPUCopyExternalImageSourceInfo source,
        GPUCopyExternalImageDestInfo destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;

GPUQueue has the following methods:

writeBuffer(buffer, bufferOffset, data, dataOffset, size)

Issues a write operation of the provided data into a GPUBuffer.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.writeBuffer(buffer, bufferOffset, data, dataOffset, size) method.
Parameter Type Nullable Optional Description
buffer GPUBuffer The buffer to write to.
bufferOffset GPUSize64 Offset in bytes into buffer to begin writing at.
data AllowSharedBufferSource Data to write into buffer.
dataOffset GPUSize64 Offset into data to begin writing from. Given in elements if data is a TypedArray and bytes otherwise.
size GPUSize64 Size of content to write from data to buffer. Given in elements if data is a TypedArray and bytes otherwise.

Returns: undefined

Content timeline steps:

  1. If data is an ArrayBuffer or DataView, let the element type be "byte". Otherwise, data is a TypedArray; let the element type be the type of the TypedArray.

  2. Let dataSize be the size of data, in elements.

  3. If size is missing, let contentsSize be dataSize − dataOffset. Otherwise, let contentsSize be size.

  4. If any of the following conditions are unsatisfied, throw an OperationError and return.

    • contentsSize ≥ 0.

    • dataOffset + contentsSizedataSize.

    • contentsSize, converted to bytes, is a multiple of 4 bytes.

  5. Let dataContents be a copy of the bytes held by the buffer source data.

  6. Let contents be the contentsSize elements of dataContents starting at an offset of dataOffset elements.

  7. Issue the subsequent steps on the Device timeline of this.

Device timeline steps:
  1. If any of the following conditions are unsatisfied, generate a validation error and return.

  2. Issue the subsequent steps on the Queue timeline of this.

Queue timeline steps:
  1. Write contents into buffer starting at bufferOffset.
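As a non-normative sketch: because data below is a TypedArray, dataOffset and size are given in elements, and the write size in bytes must be a multiple of 4. The names device and uniformBuffer are assumptions.

```javascript
// Four floats = 16 bytes, satisfying the 4-byte-multiple requirement.
const data = new Float32Array([0.0, 0.25, 0.5, 1.0]);

// Hypothetical writes (require a GPUDevice and a buffer with COPY_DST usage):
// device.queue.writeBuffer(uniformBuffer, 0, data);
// Writing only the last two elements to byte offset 8:
// device.queue.writeBuffer(uniformBuffer, 8, data, 2, 2);
```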

writeTexture(destination, data, dataLayout, size)

Issues a write operation of the provided data into a GPUTexture.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.writeTexture(destination, data, dataLayout, size) method.
Parameter Type Nullable Optional Description
destination GPUTexelCopyTextureInfo The texture subresource and origin to write to.
data AllowSharedBufferSource Data to write into destination.
dataLayout GPUTexelCopyBufferLayout Layout of the content in data.
size GPUExtent3D Extents of the content to write from data to destination.

Returns: undefined

Content timeline steps:

  1. ? validate GPUOrigin3D shape(destination.origin).

  2. ? validate GPUExtent3D shape(size).

  3. Let dataBytes be a copy of the bytes held by the buffer source data.

    Note: This is described as copying all of data to the device timeline, but in practice data could be much larger than necessary. Implementations should optimize by copying only the necessary bytes.

  4. Issue the subsequent steps on the Device timeline of this.

Device timeline steps:
  1. Let aligned be false.

  2. Let dataLength be dataBytes.length.

  3. If any of the following conditions are unsatisfied, generate a validation error and return.

    Note: unlike GPUCommandEncoder.copyBufferToTexture(), there is no alignment requirement on either dataLayout.bytesPerRow or dataLayout.offset.

  4. Issue the subsequent steps on the Queue timeline of this.

Queue timeline steps:
  1. Let blockWidth be the texel block width of destination.texture.

  2. Let blockHeight be the texel block height of destination.texture.

  3. Let dstOrigin be destination.origin.

  4. Let dstBlockOriginX be (dstOrigin.x ÷ blockWidth).

  5. Let dstBlockOriginY be (dstOrigin.y ÷ blockHeight).

  6. Let blockColumns be (copySize.width ÷ blockWidth).

  7. Let blockRows be (copySize.height ÷ blockHeight).

  8. Assert that dstBlockOriginX, dstBlockOriginY, blockColumns, and blockRows are integers.

  9. For each z in the range [0, copySize.depthOrArrayLayers − 1]:

    1. Let dstSubregion be texture copy sub-region (z + dstOrigin.z) of destination.

    2. For each y in the range [0, blockRows − 1]:

      1. For each x in the range [0, blockColumns − 1]:

        1. Let blockOffset be the texel block byte offset of dataLayout for (x, y, z) of destination.texture.

        2. Set texel block (dstBlockOriginX + x, dstBlockOriginY + y) of dstSubregion to be an equivalent texel representation to the texel block described by dataBytes at offset blockOffset.
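As a non-normative sketch, writing a 2×2 rgba8unorm region: per the note above, bytesPerRow needs no alignment for writeTexture. The names device and texture are assumptions.

```javascript
// A 2×2 block of rgba8unorm texels (4 bytes per texel).
const pixels = new Uint8Array(2 * 2 * 4);
pixels.set([255, 0, 0, 255], 0); // opaque red texel at (0, 0)

// Hypothetical write (requires a GPUDevice and a COPY_DST texture):
// device.queue.writeTexture(
//     { texture, origin: [0, 0, 0] },
//     pixels,
//     { bytesPerRow: 2 * 4, rowsPerImage: 2 }, // no alignment required here
//     [2, 2, 1]);
```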

copyExternalImageToTexture(source, destination, copySize)

Issues a copy operation of the contents of a platform image/canvas into the destination texture.

This operation performs color encoding into the destination encoding according to the parameters of GPUCopyExternalImageDestInfo.

Copying into a -srgb texture results in the same texture bytes, not the same decoded values, as copying into the corresponding non--srgb format. Thus, after a copy operation, sampling the destination texture has different results depending on whether its format is -srgb, all else unchanged.

NOTE:
When copying from a "webgl"/"webgl2" context canvas, the WebGL Drawing Buffer may not exist at certain points in the frame presentation cycle (after the image has been moved to the compositor for display). To avoid this, either:
  • Issue copyExternalImageToTexture() in the same task as the WebGL rendering operations, to ensure the copy occurs before the WebGL canvas is presented.

  • If not possible, set the preserveDrawingBuffer option in WebGLContextAttributes to true, so that the drawing buffer will still contain a copy of the frame contents after they’ve been presented. Note, this extra copy may have a performance cost.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.copyExternalImageToTexture(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUCopyExternalImageSourceInfo source image and origin to copy to destination.
destination GPUCopyExternalImageDestInfo The texture subresource and origin to write to, and its encoding metadata.
copySize GPUExtent3D Extents of the content to write from source to destination.

Returns: undefined

Content timeline steps:

  1. ? validate GPUOrigin2D shape(source.origin).

  2. ? validate GPUOrigin3D shape(destination.origin).

  3. ? validate GPUExtent3D shape(copySize).

  4. Let sourceImage be source.source

  5. If sourceImage is not origin-clean, throw a SecurityError and return.

  6. If any of the following requirements are unmet, throw an OperationError and return.

    • source.origin.x + copySize.width must be ≤ the width of sourceImage.

    • source.origin.y + copySize.height must be ≤ the height of sourceImage.

    • copySize.depthOrArrayLayers must be ≤ 1.

  7. Let usability be ? check the usability of the image argument(source).

  8. Issue the subsequent steps on the Device timeline of this.

Device timeline steps:
  1. Let texture be destination.texture.

  2. If any of the following requirements are unmet, generate a validation error and return.

  3. If copySize.depthOrArrayLayers is > 0, issue the subsequent steps on the Queue timeline of this.

Queue timeline steps:
  1. Assert that the texel block width of destination.texture is 1, the texel block height of destination.texture is 1, and that copySize.depthOrArrayLayers is 1.

  2. Let srcOrigin be source.origin.

  3. Let dstOrigin be destination.origin.

  4. Let dstSubregion be texture copy sub-region (dstOrigin.z) of destination.

  5. For each y in the range [0, copySize.height − 1]:

    1. Let srcY be y if source.flipY is false and (copySize.height − 1 − y) otherwise.

    2. For each x in the range [0, copySize.width − 1]:

      1. Set texel block (dstOrigin.x + x, dstOrigin.y + y) of dstSubregion to be an equivalent texel representation of the pixel at (srcOrigin.x + x, srcOrigin.y + srcY) of source.source after applying any color encoding required by destination.colorSpace and destination.premultipliedAlpha.
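As a non-normative sketch, copying a whole canvas into mip level 0 of a texture while flipping Y. The names device, canvas, and texture are assumptions; the canvas must be origin-clean, and the destination must be usable as a copy target.

```javascript
// Derive a GPUExtent3D covering the whole canvas (depthOrArrayLayers must be 1).
const copySizeForCanvas = (c) => [c.width, c.height, 1];

// Hypothetical copy (requires a GPUDevice and suitable destination texture):
// device.queue.copyExternalImageToTexture(
//     { source: canvas, flipY: true },
//     { texture, premultipliedAlpha: true },
//     copySizeForCanvas(canvas));
```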

submit(commandBuffers)

Schedules the execution of the command buffers by the GPU on this queue.

Submitted command buffers cannot be used again.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.submit(commandBuffers) method.
Parameter Type Nullable Optional Description
commandBuffers sequence<GPUCommandBuffer>

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this:

Device timeline steps:
  1. If any of the following requirements are unmet, generate a validation error, invalidate each GPUCommandBuffer in commandBuffers and return.

  2. For each commandBuffer in commandBuffers:

    1. Invalidate commandBuffer.

  3. Issue the subsequent steps on the Queue timeline of this:

Queue timeline steps:
  1. For each commandBuffer in commandBuffers:

    1. Execute each command in commandBuffer.[[command_list]].

onSubmittedWorkDone()

Returns a Promise that resolves once this queue finishes processing all the work submitted up to this moment.

Resolution of this Promise implies the completion of mapAsync() calls made prior to that call, on GPUBuffers last used exclusively on that queue.

Called on: GPUQueue this.

Returns: Promise<undefined>

Content timeline steps:

  1. Let contentTimeline be the current Content timeline.

  2. Let promise be a new promise.

  3. Issue the synchronization steps on the Device timeline of this.

  4. Return promise.

Device timeline synchronization steps:
  1. Let event occur upon the completion of all currently-enqueued operations.

  2. Listen for timeline event event on this.[[device]], handled by the subsequent steps on contentTimeline.

Content timeline steps:
  1. Resolve promise.
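The submit-then-wait pattern above can be sketched as a small helper; this is a non-normative sketch, and the queue and command buffers are supplied by the caller. Note that submitted command buffers cannot be reused.

```javascript
// Schedule command buffers on a queue, then wait for the GPU to finish
// all work submitted up to this point.
async function submitAndWait(queue, commandBuffers) {
    queue.submit(commandBuffers);      // schedules execution; the buffers are now spent
    await queue.onSubmittedWorkDone(); // resolves once the queue finishes processing
}

// Hypothetical usage:
// await submitAndWait(device.queue, [commandEncoder.finish()]);
```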

20. Queries

20.1. GPUQuerySet

[Exposed=(Window, Worker), SecureContext]
interface GPUQuerySet {
    undefined destroy();

    readonly attribute GPUQueryType type;
    readonly attribute GPUSize32Out count;
};
GPUQuerySet includes GPUObjectBase;

GPUQuerySet has the following immutable properties:

type, of type GPUQueryType, readonly

The type of the queries managed by this GPUQuerySet.

count, of type GPUSize32Out, readonly

The number of queries managed by this GPUQuerySet.

GPUQuerySet has the following device timeline properties:

[[destroyed]], of type boolean, initially false

If the query set is destroyed, it can no longer be used in any operation, and its underlying memory can be freed.

20.1.1. QuerySet Creation

A GPUQuerySetDescriptor specifies the options to use in creating a GPUQuerySet.

dictionary GPUQuerySetDescriptor
         : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
};
type, of type GPUQueryType

The type of queries managed by GPUQuerySet.

count, of type GPUSize32

The number of queries managed by GPUQuerySet.

createQuerySet(descriptor)

Creates a GPUQuerySet.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createQuerySet(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUQuerySetDescriptor Description of the GPUQuerySet to create.

Returns: GPUQuerySet

Content timeline steps:

  1. If descriptor.type is "timestamp", but "timestamp-query" is not enabled for this:

    1. Throw a TypeError.

  2. Let q be ! create a new WebGPU object(this, GPUQuerySet, descriptor).

  3. Set q.type to descriptor.type.

  4. Set q.count to descriptor.count.

  5. Issue the initialization steps on the Device timeline of this.

  6. Return q.

Device timeline initialization steps:
  1. If any of the following requirements are unmet, generate a validation error, invalidate q and return.

    • this must not be lost.

    • descriptor.count must be ≤ 4096.

  2. Create a device allocation for q where each entry in the query set is zero.

    If the allocation fails without side-effects, generate an out-of-memory error, invalidate q, and return.

Creating a GPUQuerySet which holds 32 occlusion query results.
const querySet = gpuDevice.createQuerySet({
    type: 'occlusion',
    count: 32
});

20.1.2. Query Set Destruction

An application that no longer requires a GPUQuerySet can choose to lose access to it before garbage collection by calling destroy().

GPUQuerySet has the following methods:

destroy()

Destroys the GPUQuerySet.

Called on: GPUQuerySet this.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the device timeline.

Device timeline steps:
  1. Set this.[[destroyed]] to true.

20.2. QueryType

enum GPUQueryType {
    "occlusion",
    "timestamp",
};

20.3. Occlusion Query

Occlusion query is only available on render passes, to query the number of fragment samples that pass all the per-fragment tests for a set of drawing commands, including scissor, sample mask, alpha to coverage, stencil, and depth tests. Any non-zero result value for the query indicates that at least one sample passed the tests and reached the output merging stage of the render pipeline; 0 indicates that no samples passed the tests.

When beginning a render pass, GPURenderPassDescriptor.occlusionQuerySet must be set to be able to use occlusion queries during the pass. An occlusion query is begun and ended by calling beginOcclusionQuery() and endOcclusionQuery() in pairs that cannot be nested, and resolved into a GPUBuffer as a 64-bit unsigned integer by GPUCommandEncoder.resolveQuerySet().
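Once resolved by GPUCommandEncoder.resolveQuerySet(), each occlusion result occupies 8 bytes in the destination GPUBuffer. The following sketch shows how an application might interpret those results; `mappedRange` is a stand-in for the ArrayBuffer a real application would obtain from GPUBuffer.getMappedRange(), filled here with example values.

```javascript
// Sketch: interpreting resolved occlusion query results.
// `mappedRange` stands in for GPUBuffer.getMappedRange() output.
function readOcclusionResults(mappedRange, queryCount) {
  // Each result is a 64-bit unsigned integer (8 bytes per query).
  const raw = new BigUint64Array(mappedRange, 0, queryCount);
  // Any non-zero value means at least one sample passed the tests;
  // 0 means no samples passed.
  return Array.from(raw, (v) => v !== 0n);
}

// Example values standing in for GPU-written results.
const mappedRange = new ArrayBuffer(3 * 8);
new BigUint64Array(mappedRange).set([0n, 17n, 1n]);
const visible = readOcclusionResults(mappedRange, 3);
// visible is [false, true, true]
```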

20.4. Timestamp Query

Timestamp queries allow applications to write timestamps to a GPUQuerySet, using:

and then resolve timestamp values (in nanoseconds as a 64-bit unsigned integer) into a GPUBuffer, using GPUCommandEncoder.resolveQuerySet().

Timestamp values are implementation-defined and may not increase monotonically. The physical device may reset the timestamp counter occasionally, which can result in unexpected values such as negative deltas between timestamps that logically should be monotonically increasing. These instances should be rare and can safely be ignored. Applications should not be written in such a way that unexpected timestamps cause an application failure.
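Since deltas between resolved timestamps can occasionally be negative after a counter reset, an application can compute elapsed times defensively, discarding unexpected values as the text above suggests. A minimal sketch (timestamps are the 64-bit unsigned nanosecond values from resolveQuerySet(), represented as BigInt):

```javascript
// Sketch: elapsed time between two resolved timestamps (nanoseconds),
// guarding against implementation-defined counter resets.
// Returns null for a delta that should be ignored.
function timestampDeltaNs(begin, end) {
  const delta = end - begin; // BigInt arithmetic: may be negative after a reset
  return delta >= 0n ? delta : null;
}

timestampDeltaNs(1_000_000n, 4_500_000n); // 3500000n (3.5 ms)
timestampDeltaNs(9_000_000n, 2_000_000n); // null — counter reset; discard
```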

There is a tracking vector here. Timestamp queries are implemented using high-resolution timers (see § 2.1.7.2 Device/queue-timeline timing). To mitigate security and privacy concerns, their precision must be reduced:

To get the current queue timestamp, run the following queue timeline steps:

Note: Since cross-origin isolation may not apply to the device timeline or queue timeline, crossOriginIsolatedCapability is never set to true.

Validate timestampWrites(device, timestampWrites)

Arguments:

Device timeline steps:

  1. Return true if the following requirements are met, and false if not:

21. Canvas Rendering

21.1. HTMLCanvasElement.getContext()

A GPUCanvasContext object is created via the getContext() method of an HTMLCanvasElement instance by passing the string literal 'webgpu' as its contextType argument.

Get a GPUCanvasContext from an offscreen HTMLCanvasElement:
const canvas = document.createElement('canvas');
const context = canvas.getContext('webgpu');

Unlike WebGL or 2D context creation, the second argument of HTMLCanvasElement.getContext() or OffscreenCanvas.getContext(), the context creation attribute dictionary options, is ignored. Instead, use GPUCanvasContext.configure(), which allows changing the canvas configuration without replacing the canvas.

To create a 'webgpu' context on a canvas (HTMLCanvasElement or OffscreenCanvas) canvas, run the following content timeline steps:
  1. Let context be a new GPUCanvasContext.

  2. Set context.canvas to canvas.

  3. Replace the drawing buffer of context.

  4. Return context.

Note: User agents should consider issuing developer-visible warnings when an ignored options argument is provided when calling getContext() to get a WebGPU canvas context.

21.2. GPUCanvasContext

[Exposed=(Window, Worker), SecureContext]
interface GPUCanvasContext {
    readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;

    undefined configure(GPUCanvasConfiguration configuration);
    undefined unconfigure();

    GPUCanvasConfiguration? getConfiguration();
    GPUTexture getCurrentTexture();
};

GPUCanvasContext has the following content timeline properties:

canvas, of type (HTMLCanvasElement or OffscreenCanvas), readonly

The canvas this context was created from.

[[configuration]], of type GPUCanvasConfiguration?, initially null

The options this context is currently configured with.

null if the context has not been configured or has been unconfigured.

[[textureDescriptor]], of type GPUTextureDescriptor?, initially null

The currently configured texture descriptor, derived from the [[configuration]] and canvas.

null if the context has not been configured or has been unconfigured.

[[drawingBuffer]], an image, initially a transparent black image with the same size as the canvas

The drawing buffer is the working-copy image data of the canvas. It is exposed as writable by [[currentTexture]] (returned by getCurrentTexture()).

The drawing buffer is used to get a copy of the image contents of a context, which occurs when the canvas is displayed or otherwise read. It may be transparent, even if [[configuration]].alphaMode is "opaque". The alphaMode only affects the result of the "get a copy of the image contents of a context" algorithm.

The drawing buffer outlives the [[currentTexture]] and contains the previously-rendered contents even after the canvas has been presented. It is only cleared in Replace the drawing buffer.

Any time the drawing buffer is read, implementations must ensure that all previously submitted work (e.g. queue submissions) have completed writing to it via [[currentTexture]].

[[currentTexture]], of type GPUTexture?, initially null

The GPUTexture to draw into for the current frame. It exposes a writable view onto the underlying [[drawingBuffer]]. getCurrentTexture() populates this slot if null, then returns it.

In the steady-state of a visible canvas, any changes to the drawing buffer made through the currentTexture get presented when updating the rendering of a WebGPU canvas. At or before that point, the texture is also destroyed and [[currentTexture]] is set to null, signalling that a new one is to be created by the next call to getCurrentTexture().

Destroying the currentTexture has no effect on the drawing buffer contents; it only terminates write-access to the drawing buffer early. During the same frame, getCurrentTexture() continues returning the same destroyed texture.

Expire the current texture sets the currentTexture to null. It is called by configure(), resizing the canvas, presentation, transferToImageBitmap(), and others.

[[lastPresentedImage]], of type (readonly image)?, initially null

The image most recently presented for this canvas in "updating the rendering of a WebGPU canvas". If the device is lost or destroyed, this image may be used as a fallback in "get a copy of the image contents of a context" in order to prevent the canvas from going blank.

Note: This property only needs to exist in implementations which implement the fallback, which is optional.

GPUCanvasContext has the following methods:

configure(configuration)

Configures the context for this canvas. This clears the drawing buffer to transparent black (in Replace the drawing buffer).

Called on: GPUCanvasContext this.

Arguments:

Arguments for the GPUCanvasContext.configure(configuration) method.
Parameter Type Nullable Optional Description
configuration GPUCanvasConfiguration Desired configuration for the context.

Returns: undefined

Content timeline steps:

  1. Let device be configuration.device.

  2. ? Validate texture format required features of configuration.format with device.[[device]].

  3. ? Validate texture format required features of each element of configuration.viewFormats with device.[[device]].

  4. If Supported context formats does not contain configuration.format, throw a TypeError.

  5. Let descriptor be the GPUTextureDescriptor for the canvas and configuration(this.canvas, configuration).

  6. Set this.[[configuration]] to configuration.

    NOTE:
    This spec requires supporting HDR via the toneMapping option. If a user agent only supports toneMapping: "standard", then the toneMapping member should not exist in GPUCanvasConfiguration, so it will not exist on the object returned by getConfiguration() and will not be accessed by configure(). This allows websites to detect feature support.
  7. Set this.[[textureDescriptor]] to descriptor.

  8. Replace the drawing buffer of this.

  9. Issue the subsequent steps on the Device timeline of device.

Device timeline steps:
  1. If any of the following requirements are unmet, generate a validation error and return.

    Note: This early validation remains valid until the next configure() call, except for validation of the size, which changes when the canvas is resized.

unconfigure()

Removes the context configuration. Destroys any textures produced while configured.

Called on: GPUCanvasContext this.

Returns: undefined

Content timeline steps:

  1. Set this.[[configuration]] to null.

  2. Set this.[[textureDescriptor]] to null.

  3. Replace the drawing buffer of this.

getConfiguration()

Returns the context configuration.

Called on: GPUCanvasContext this.

Returns: GPUCanvasConfiguration or null

Content timeline steps:

  1. Let configuration be a copy of this.[[configuration]].

  2. Return configuration.

NOTE:
In scenarios where getConfiguration() shows that toneMapping is implemented and the dynamic-range media query indicates HDR support, then WebGPU canvas should render content using the full HDR range instead of clamping values to the SDR range of the HDR display.
getCurrentTexture()

Get the GPUTexture that will be composited to the document by the GPUCanvasContext next.

NOTE:
An application should call getCurrentTexture() in the same task that renders to the canvas texture. Otherwise, the texture could get destroyed by these steps before the application is finished rendering to it.

The expiry task (defined below) is optional to implement. Even if implemented, task source priority is not normatively defined, so expiry may happen as early as the next task, or as late as after all other task sources are empty (see automatic expiry task source). Expiry is only guaranteed when a visible canvas is displayed (updating the rendering of a WebGPU canvas) and in other callers of "Expire the current texture".

Called on: GPUCanvasContext this.

Returns: GPUTexture

Content timeline steps:

  1. If this.[[configuration]] is null, throw an InvalidStateError and return.

  2. Assert this.[[textureDescriptor]] is not null.

  3. Let device be this.[[configuration]].device.

  4. If this.[[currentTexture]] is null:

    1. Replace the drawing buffer of this.

    2. Set this.[[currentTexture]] to the result of calling device.createTexture() with this.[[textureDescriptor]], except with the GPUTexture’s underlying storage pointing to this.[[drawingBuffer]].

      Note: If the texture can’t be created (e.g. due to validation failure or out-of-memory), this generates an error and returns an invalidated GPUTexture. Some validation here is redundant with that done in configure(). Implementations must not skip this redundant validation.

  5. Optionally, queue an automatic expiry task with device device and the following steps:

    1. Expire the current texture of this.

      Note: If this already happened when updating the rendering of a WebGPU canvas, it has no effect.

  6. Return this.[[currentTexture]].

Note: The same GPUTexture object will be returned by every call to getCurrentTexture() until "Expire the current texture" runs, even if that GPUTexture is destroyed, failed validation, or failed to allocate.

To get a copy of the image contents of a context:

Arguments:

Returns: image contents

Content timeline steps:

  1. Let snapshot be a transparent black image of the same size as context.canvas.

  2. Let configuration be context.[[configuration]].

  3. If configuration is null:

    1. Return snapshot.

    Note: The configuration will be null if the context has not been configured or has been unconfigured. This is identical to the behavior when the canvas has no context.

  4. Ensure that all submitted work items (e.g. queue submissions) have completed writing to the image (via context.[[currentTexture]]).

  5. If configuration.device is found to be valid:

    1. Set snapshot to a copy of the context.[[drawingBuffer]].

    Else, if context.[[lastPresentedImage]] is not null:

    1. Optionally, set snapshot to a copy of context.[[lastPresentedImage]].

      Note: This is optional because the [[lastPresentedImage]] may no longer exist, depending on what caused device loss. Implementations may choose to skip it even if they do still have access to that image.

  6. Let alphaMode be configuration.alphaMode.

  7. If alphaMode is "opaque":
    1. Clear the alpha channel of snapshot to 1.0.

    2. Tag snapshot as being opaque.

    Note: If the [[currentTexture]] (if any) has been destroyed (for example in "Expire the current texture"), the alpha channel is unobservable, and implementations may clear the alpha channel in-place.

    Otherwise:

    Tag snapshot with alphaMode.

  8. Tag snapshot with the colorSpace and toneMapping of configuration.

  9. Return snapshot.

To Replace the drawing buffer of a GPUCanvasContext context, run the following content timeline steps:
  1. Expire the current texture of context.

  2. Let configuration be context.[[configuration]].

  3. Set context.[[drawingBuffer]] to a transparent black image of the same size as context.canvas.

    • If configuration is null, the drawing buffer is tagged with the color space "srgb". In this case, the drawing buffer will remain blank until the context is configured.

    • If not, the drawing buffer has the specified configuration.format and is tagged with the specified configuration.colorSpace and configuration.toneMapping.

    Note: configuration.alphaMode is ignored until "get a copy of the image contents of a context".

    NOTE:
    A newly replaced drawing buffer image behaves as if it is cleared to transparent black, but, like after "discard", an implementation can clear it lazily only if it becomes necessary.

    Note: This will often be a no-op, if the drawing buffer is already cleared and has the correct configuration.

To Expire the current texture of a GPUCanvasContext context, run the following content timeline steps:
  1. If context.[[currentTexture]] is not null:

    1. Call context.[[currentTexture]].destroy() (without destroying context.[[drawingBuffer]]) to terminate write access to the image.

    2. Set context.[[currentTexture]] to null.

21.3. HTML Specification Hooks

The following algorithms "hook" into algorithms in the HTML specification, and must run at the specified points.

When the "bitmap" is read from an HTMLCanvasElement or OffscreenCanvas with a GPUCanvasContext context, run the following content timeline steps:
  1. Return a copy of the image contents of context.

NOTE:
This occurs in many places, including:

If alphaMode is "opaque", this incurs a clear of the alpha channel. Implementations may skip this step when they are able to read or display images in a way that ignores the alpha channel.

If an application needs a canvas only for interop (not presentation), avoid "opaque" if it is not needed.

When updating the rendering of a WebGPU canvas (an HTMLCanvasElement or an OffscreenCanvas with a placeholder canvas element) with a GPUCanvasContext context, which occurs before getting the canvas’s image contents, in the following sub-steps of the event loop processing model:

Note: Service and Shared workers do not have "update the rendering" steps because they cannot render to user-visible canvases. requestAnimationFrame() is not exposed in ServiceWorkerGlobalScope and SharedWorkerGlobalScope, and OffscreenCanvases from transferControlToOffscreen() cannot be sent to these workers.

Run the following content timeline steps:

  1. Expire the current texture of context.

    Note: If this already happened in the task queued by getCurrentTexture(), it has no effect.

  2. Set context.[[lastPresentedImage]] to context.[[drawingBuffer]].

    Note: This is just a reference, not a copy; the drawing buffer’s contents can’t change in-place after the current texture has expired.

Note: This does not happen for standalone OffscreenCanvases (created by new OffscreenCanvas()).

transferToImageBitmap from WebGPU:

When transferToImageBitmap() is called on a canvas with GPUCanvasContext context, after creating an ImageBitmap from the canvas’s bitmap, run the following content timeline steps:

  1. Replace the drawing buffer of context.

Note: This makes transferToImageBitmap() equivalent to "moving" (and possibly alpha-clearing) the image contents into the ImageBitmap, without a copy.

21.4. GPUCanvasConfiguration

The supported context formats are the set of GPUTextureFormats: «"bgra8unorm", "rgba8unorm", "rgba16float"». These formats must be supported when specified as a GPUCanvasConfiguration.format regardless of the given GPUCanvasConfiguration.device.

Note: Canvas configuration cannot use srgb formats like "bgra8unorm-srgb". Instead, use the non-srgb equivalent ("bgra8unorm"), specify the srgb format in the viewFormats, and use createView() to create a view with an srgb format.

enum GPUCanvasAlphaMode {
    "opaque",
    "premultiplied",
};

enum GPUCanvasToneMappingMode {
    "standard",
    "extended",
};

dictionary GPUCanvasToneMapping {
  GPUCanvasToneMappingMode mode = "standard";
};

dictionary GPUCanvasConfiguration {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.RENDER_ATTACHMENT
    sequence<GPUTextureFormat> viewFormats = [];
    PredefinedColorSpace colorSpace = "srgb";
    GPUCanvasToneMapping toneMapping = {};
    GPUCanvasAlphaMode alphaMode = "opaque";
};

GPUCanvasConfiguration has the following members:

device, of type GPUDevice

The GPUDevice that textures returned by getCurrentTexture() will be compatible with.

format, of type GPUTextureFormat

The format that textures returned by getCurrentTexture() will have. Must be one of the Supported context formats.

usage, of type GPUTextureUsageFlags, defaulting to 0x10

The usage that textures returned by getCurrentTexture() will have. RENDER_ATTACHMENT is the default, but is not automatically included if the usage is explicitly set. Be sure to include RENDER_ATTACHMENT when setting a custom usage if you wish to use textures returned by getCurrentTexture() as color targets for a render pass.
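Because an explicit usage replaces rather than extends the default, RENDER_ATTACHMENT has to be re-included by hand when other usages are added. A sketch of combining the flags (the numeric values mirror the GPUTextureUsage constants; in a real page, use GPUTextureUsage.* rather than literals):

```javascript
// Sketch: building a custom usage for GPUCanvasConfiguration.usage.
// Values mirror the GPUTextureUsage constants.
const RENDER_ATTACHMENT = 0x10;
const COPY_SRC = 0x01;

// Setting a custom usage replaces the default, so RENDER_ATTACHMENT must be
// re-included explicitly to keep the texture usable as a render pass target.
const usage = RENDER_ATTACHMENT | COPY_SRC; // 0x11
```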

viewFormats, of type sequence<GPUTextureFormat>, defaulting to []

The formats that views created from textures returned by getCurrentTexture() may use.

colorSpace, of type PredefinedColorSpace, defaulting to "srgb"

The color space that values written into textures returned by getCurrentTexture() should be displayed with.

toneMapping, of type GPUCanvasToneMapping, defaulting to {}

The tone mapping determines how the contents of textures returned by getCurrentTexture() are to be displayed.

Note: If an implementation doesn’t support HDR WebGPU canvases, it should also not expose this member, to allow for feature detection. See getConfiguration().

alphaMode, of type GPUCanvasAlphaMode, defaulting to "opaque"

Determines the effect that alpha values will have on the content of textures returned by getCurrentTexture() when read, displayed, or used as an image source.

Configure a GPUCanvasContext to be used with a specific GPUDevice, using the preferred format for this context:
const canvas = document.createElement('canvas');
const context = canvas.getContext('webgpu');

context.configure({
    device: gpuDevice,
    format: navigator.gpu.getPreferredCanvasFormat(),
});
The GPUTextureDescriptor for the canvas and configuration( (HTMLCanvasElement or OffscreenCanvas) canvas, GPUCanvasConfiguration configuration) is a GPUTextureDescriptor with the following members:

and other members set to their defaults.

canvas.width refers to HTMLCanvasElement.width or OffscreenCanvas.width. canvas.height refers to HTMLCanvasElement.height or OffscreenCanvas.height.

21.4.1. Canvas Color Space

During presentation, the color values in the canvas are converted to the color space of the screen.

The toneMapping determines the handling of values outside of the [0, 1] interval in the color space of the screen.

21.4.2. Canvas Context sizing

All canvas configuration is set in configure() except for the resolution of the canvas, which is set by the canvas’s width and height.

Note: Like WebGL and 2d canvas, resizing a WebGPU canvas loses the current contents of the drawing buffer. In WebGPU, it does so by replacing the drawing buffer.

When an HTMLCanvasElement or OffscreenCanvas canvas with a GPUCanvasContext context has its width or height attributes set, update the canvas size by running the following content timeline steps:
  1. Replace the drawing buffer of context.

  2. Let configuration be context.[[configuration]].

  3. If configuration is not null:

    1. Set context.[[textureDescriptor]] to the GPUTextureDescriptor for the canvas and configuration(canvas, configuration).

Note: This may result in a GPUTextureDescriptor which exceeds the maxTextureDimension2D of the device. In this case, validation will fail inside getCurrentTexture().

Note: This algorithm is run any time the canvas width or height attributes are set, even if their value is not changed.
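An application that resizes its canvas dynamically may want to avoid the validation failure mentioned above by clamping the requested size against the device limit. A sketch, where `limits` stands in for GPUDevice.limits:

```javascript
// Sketch: clamping a requested canvas size so the derived texture descriptor
// stays within the device's maxTextureDimension2D.
// `limits` stands in for GPUDevice.limits.
function clampCanvasSize(width, height, limits) {
  const max = limits.maxTextureDimension2D;
  return { width: Math.min(width, max), height: Math.min(height, max) };
}

clampCanvasSize(16384, 1080, { maxTextureDimension2D: 8192 });
// → { width: 8192, height: 1080 }
```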

21.5. GPUCanvasToneMappingMode

This enum specifies how color values are displayed to the screen.

"standard"

Color values within the standard dynamic range of the screen are unchanged, and all other color values are projected to the standard dynamic range of the screen.

Note: This projection is often accomplished by clamping color values in the color space of the screen to the [0, 1] interval.

For example, suppose that the value (1.035, -0.175, -0.140) is written to an 'srgb' canvas.

If this is presented to an sRGB screen, then this will be converted to sRGB (which is a no-op, because the canvas is sRGB), then projected into the display’s space. Using component-wise clamping, this results in the sRGB value (1.0, 0.0, 0.0).

If this is presented to a Display P3 screen, then this will be converted to the value (0.948, 0.106, 0.01) in the Display P3 color space, and no clamping will be needed.

"extended"

Color values in the extended dynamic range of the screen are unchanged, and all other color values are projected to the extended dynamic range of the screen.

Note: This projection is often accomplished by clamping color values in the color space of the screen to the interval of values that the screen is capable of displaying, which may include values greater than 1.

For example, suppose that the value (2.5, -0.15, -0.15) is written to an 'srgb' canvas.

If this is presented to an sRGB screen that is capable of displaying values in the [0, 4] interval in sRGB space, then this will be converted to sRGB (which is a no-op, because the canvas is sRGB), then projected into the display’s space. If using component-wise clamping, this results in the sRGB value (2.5, 0.0, 0.0).

If this is presented to a Display P3 screen that is capable of displaying values in the [0, 2] interval in Display P3 space, then this will be converted to the value (2.3, 0.545, 0.386) in the Display P3 color space, then projected into the display’s space. If using component-wise clamping, this results in the Display P3 value (2.0, 0.545, 0.386).

21.6. GPUCanvasAlphaMode

This enum selects how the contents of the canvas will be interpreted when read, displayed to the screen, or used as an image source (in drawImage, toDataURL, etc.).

Below, src is a value in the canvas texture, and dst is an image that the canvas is being composited into (e.g. an HTML page rendering, or a 2D canvas).

"opaque"

Read RGB as opaque and ignore alpha values. If the content is not already opaque, the alpha channel is cleared to 1.0 in "get a copy of the image contents of a context".

"premultiplied"

Read RGBA as premultiplied: color values are premultiplied by their alpha value. 100% red at 50% alpha is [0.5, 0, 0, 0.5].

If the canvas texture contains out-of-gamut premultiplied RGBA values at the time the canvas contents are read, the behavior depends on whether the canvas is:

used as an image source

Values are preserved, as described in color space conversion.

displayed to the screen

Compositing results are undefined.

Note: This is true even if color space conversion would produce in-gamut values before compositing, because the intermediate format for compositing is not specified.
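The premultiplied form described above can be sketched as a simple conversion from straight-alpha values; this mirrors the example given for 100% red at 50% alpha:

```javascript
// Sketch: converting a straight-alpha color to the premultiplied form expected
// in a "premultiplied" canvas: color channels are scaled by the alpha value.
function premultiply([r, g, b, a]) {
  return [r * a, g * a, b * a, a];
}

premultiply([1, 0, 0, 0.5]); // [0.5, 0, 0, 0.5]
```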

22. Errors & Debugging

During the normal course of operation of WebGPU, errors are raised via dispatch error.

After a device is lost, errors are no longer surfaced, where possible. After this point, implementations do not need to run validation or error tracking.

22.1. Fatal Errors

enum GPUDeviceLostReason {
    "unknown",
    "destroyed",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUDeviceLostInfo {
    readonly attribute GPUDeviceLostReason reason;
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};

GPUDevice has the following additional attributes:

lost, of type Promise<GPUDeviceLostInfo>, readonly

A slot-backed attribute holding a promise which is created with the device, remains pending for the lifetime of the device, then resolves when the device is lost.

Upon initialization, it is set to a new promise.

22.2. GPUError

[Exposed=(Window, Worker), SecureContext]
interface GPUError {
    readonly attribute DOMString message;
};

GPUError is the base interface for all errors surfaced from popErrorScope() and the uncapturederror event.

Errors must only be generated for operations that explicitly state the conditions one may be generated under in their respective algorithms, and the subtype of error that is generated.

No errors are generated from a device which is lost. See § 22 Errors & Debugging.

Note: GPUError may gain new subtypes in future versions of this spec. Applications should handle this possibility, using only the error’s message when possible, and specializing using instanceof. Use error.constructor.name when it’s necessary to serialize an error (e.g. into JSON, for a debug report).
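The serialization approach suggested in the note above can be sketched as follows; the classes here are simplified stand-ins for the real interfaces, so that unknown future subtypes still serialize usefully without an instanceof check per type:

```javascript
// Sketch: serializing a GPUError for a debug report via the constructor name.
// These classes are stand-ins for the real GPUError interfaces.
class GPUError { constructor(message) { this.message = message; } }
class GPUValidationError extends GPUError {}

function serializeError(error) {
  return JSON.stringify({ type: error.constructor.name, message: error.message });
}

serializeError(new GPUValidationError('binding 0 missing'));
// → '{"type":"GPUValidationError","message":"binding 0 missing"}'
```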

GPUError has the following immutable properties:

message, of type DOMString, readonly

A human-readable, localizable text message providing information about the error that occurred.

Note: This message is generally intended for application developers to debug their applications and capture information for debug reports, not to be surfaced to end-users.

Note: User agents should not include potentially machine-parsable details in this message, such as free system memory on "out-of-memory" or other details about the conditions under which memory was exhausted.

Note: The message should follow the best practices for language and direction information. This includes making use of any future standards which may emerge regarding the reporting of string language and direction metadata.

Editorial note: At the time of this writing, no language/direction recommendation is available that provides compatibility and consistency with legacy APIs, but when there is, adopt it formally.

[Exposed=(Window, Worker), SecureContext]
interface GPUValidationError
        : GPUError {
    constructor(DOMString message);
};

GPUValidationError is a subtype of GPUError which indicates that an operation did not satisfy all validation requirements. Validation errors are always indicative of an application error, and are expected to fail the same way across all devices, assuming the same [[features]] and [[limits]] are in use.

To generate a validation error for GPUDevice device, run the following steps:

Device timeline steps:

  1. Let error be a new GPUValidationError with an appropriate error message.

  2. Dispatch error error to device.

[Exposed=(Window, Worker), SecureContext]
interface GPUOutOfMemoryError
        : GPUError {
    constructor(DOMString message);
};

GPUOutOfMemoryError is a subtype of GPUError which indicates that there was not enough free memory to complete the requested operation. The operation may succeed if attempted again with a lower memory requirement (like using smaller texture dimensions), or if memory used by other resources is released first.

To generate an out-of-memory error for GPUDevice device, run the following steps:

Device timeline steps:

  1. Let error be a new GPUOutOfMemoryError with an appropriate error message.

  2. Dispatch error error to device.

[Exposed=(Window, Worker), SecureContext]
interface GPUInternalError
        : GPUError {
    constructor(DOMString message);
};

GPUInternalError is a subtype of GPUError which indicates that an operation failed for a system or implementation-specific reason even when all validation requirements have been satisfied. For example, the operation may exceed the capabilities of the implementation in a way not easily captured by the supported limits. The same operation may succeed on other devices or under different circumstances.

To generate an internal error for GPUDevice device, run the following steps:

Device timeline steps:

  1. Let error be a new GPUInternalError with an appropriate error message.

  2. Dispatch error error to device.

22.3. Error Scopes

A GPU error scope captures GPUErrors that were generated while the GPU error scope was current. Error scopes are used to isolate errors that occur within a set of WebGPU calls, typically for debugging purposes or to make an operation more fault tolerant.

GPU error scope has the following device timeline properties:

[[errors]], of type list<GPUError>, initially []

The GPUErrors, if any, observed while the GPU error scope was current.

[[filter]], of type GPUErrorFilter

Determines what type of GPUError this GPU error scope observes.

enum GPUErrorFilter {
    "validation",
    "out-of-memory",
    "internal",
};

partial interface GPUDevice {
    undefined pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};

GPUErrorFilter defines the type of errors that should be caught when calling pushErrorScope():

"validation"

Indicates that the error scope will catch a GPUValidationError.

"out-of-memory"

Indicates that the error scope will catch a GPUOutOfMemoryError.

"internal"

Indicates that the error scope will catch a GPUInternalError.

GPUDevice has the following device timeline properties:

[[errorScopeStack]], of type stack<GPU error scope>

A stack of GPU error scopes that have been pushed to the GPUDevice.

The current error scope for a GPUError error and GPUDevice device is determined by issuing the following steps to the device timeline of device:

Device timeline steps:

  1. If error is an instance of:

    GPUValidationError

    Let type be "validation".

    GPUOutOfMemoryError

    Let type be "out-of-memory".

    GPUInternalError

    Let type be "internal".

  2. Let scope be the last item of device.[[errorScopeStack]].

  3. While scope is not undefined:

    1. If scope.[[filter]] is type, return scope.

    2. Set scope to the previous item of device.[[errorScopeStack]].

  4. Return undefined.
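The steps above amount to a search from the top of [[errorScopeStack]] downward for the first scope whose filter matches the error's type. A sketch of this search in plain JavaScript (scopes are modeled as plain objects with a `filter` member):

```javascript
// Sketch of the "current error scope" algorithm: walk the stack from the most
// recently pushed scope down, returning the first whose filter matches the
// error's type, or undefined if no scope captures it.
function currentErrorScope(errorScopeStack, errorType) {
  for (let i = errorScopeStack.length - 1; i >= 0; i--) {
    if (errorScopeStack[i].filter === errorType) return errorScopeStack[i];
  }
  return undefined; // error will be dispatched as an uncapturederror event
}

const stack = [{ filter: 'validation' }, { filter: 'out-of-memory' }];
currentErrorScope(stack, 'out-of-memory'); // → the top scope
currentErrorScope(stack, 'internal');      // → undefined
```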

To dispatch an error GPUError error on GPUDevice device, run the following device timeline steps:
Device timeline steps:

Note: No errors are generated from a device which is lost. If this algorithm is called while device is lost, it will not be observable to the application. See § 22 Errors & Debugging.

  1. Let scope be the current error scope for error and device.

  2. If scope is not undefined:

    1. Append error to scope.[[errors]].

    2. Return.

  3. Otherwise issue the following steps to the content timeline:

Content timeline steps:
  1. If the user agent chooses, queue a global task for GPUDevice device with the following steps:

    1. Fire a GPUUncapturedErrorEvent named "uncapturederror" on device, with an error of error.

Note: After dispatching the event, user agents should surface uncaptured errors to developers, for example as warnings in the browser’s developer console, unless the event’s defaultPrevented is true. In other words, calling preventDefault() on the event should silence the console warning.

Note: The user agent may choose to throttle or limit the number of GPUUncapturedErrorEvents that a GPUDevice can raise to prevent an excessive amount of error handling or logging from impacting performance.

pushErrorScope(filter)

Pushes a new GPU error scope onto the [[errorScopeStack]] for this.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.pushErrorScope(filter) method:
filter: GPUErrorFilter. Which class of errors this error scope observes.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.

Device timeline steps:
  1. Let scope be a new GPU error scope.

  2. Set scope.[[filter]] to filter.

  3. Push scope onto this.[[errorScopeStack]].

popErrorScope()

Pops a GPU error scope off the [[errorScopeStack]] for this and resolves to any GPUError observed by the error scope, or null if none.

There is no guarantee of the ordering of promise resolution.

Called on: GPUDevice this.

Returns: Promise<GPUError?>

Content timeline steps:

  1. Let contentTimeline be the current Content timeline.

  2. Let promise be a new promise.

  3. Issue the check steps on the Device timeline of this.

  4. Return promise.

Device timeline check steps:
  1. If this is lost:

    1. Issue the following steps on contentTimeline:

      Content timeline steps:
      1. Resolve promise with null.

    2. Return.

    Note: No errors are generated from a device which is lost. See § 22 Errors & Debugging.

  2. If any of the following requirements are unmet:

    • this.[[errorScopeStack]] must not be empty.

    Then issue the following steps on contentTimeline and return:

    Content timeline steps:
    1. Reject promise with an OperationError.

  3. Let scope be the result of popping an item off of this.[[errorScopeStack]].

  4. Let error be any one of the items in scope.[[errors]], or null if there are none.

    For any two errors E1 and E2 in the list, if E2 was caused by E1, E2 should not be the one selected.

    Note: For example, if E1 comes from t = createTexture(), and E2 comes from t.createView() because t was invalid, E1 should be preferred since it will be easier for a developer to understand what went wrong. Since both of these are GPUValidationErrors, the only difference will be in the message field, which is meant only to be read by humans anyway.

  5. At an unspecified point now or in the future, issue the subsequent steps on contentTimeline.

    Note: By allowing popErrorScope() calls to resolve in any order, with any of the errors observed by the scope, this spec allows validation to complete out of order, as long as any state observations are made at the appropriate point in adherence to this spec. For example, this allows implementations to perform shader compilation, which depends only on non-stateful inputs, to be completed on a background thread in parallel with other device-timeline work, and report any resulting errors later.

Content timeline steps:
  1. Resolve promise with error.

Using error scopes to capture validation errors from a GPUDevice operation that may fail:
gpuDevice.pushErrorScope('validation');

let sampler = gpuDevice.createSampler({
    maxAnisotropy: 0, // Invalid, maxAnisotropy must be at least 1.
});

gpuDevice.popErrorScope().then((error) => {
    if (error) {
        // There was an error creating the sampler, so discard it.
        sampler = null;
        console.error(`An error occurred while creating sampler: ${error.message}`);
    }
});
NOTE:
Error scopes can encompass as many commands as needed. The number of commands an error scope covers will generally be correlated to what sort of action the application intends to take in response to an error occurring.

For example: An error scope that only contains the creation of a single resource, such as a texture or buffer, can be used to detect failures such as out of memory conditions, in which case the application may try freeing some resources and trying the allocation again.

Error scopes do not identify which command failed, however. So, for instance, wrapping all the commands executed while loading a model in a single error scope will not offer enough granularity to determine if the issue was due to memory constraints. As a result, freeing resources would usually not be a productive response to a failure of that scope. A more appropriate response would be to allow the application to fall back to a different model or produce a warning that the model could not be loaded. If responding to memory constraints is desired, the operations allocating memory can always be wrapped in a smaller nested error scope.

22.4. Telemetry

When a GPUError is generated that is not observed by any GPU error scope, the user agent may fire an event named uncapturederror at a GPUDevice using GPUUncapturedErrorEvent.

Note: uncapturederror events are intended to be used for telemetry and reporting unexpected errors. They may not be dispatched for all uncaptured errors (for example, there may be a limit on the number of errors surfaced), and should not be used for handling known error cases that may occur during normal operation of an application. Prefer using pushErrorScope() and popErrorScope() in those cases.

[Exposed=(Window, Worker), SecureContext]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};

GPUUncapturedErrorEvent has the following attributes:

error, of type GPUError, readonly

A slot-backed attribute holding an object representing the error that was uncaptured. This has the same type as errors returned by popErrorScope().

partial interface GPUDevice {
    attribute EventHandler onuncapturederror;
};

GPUDevice has the following content timeline properties:

onuncapturederror, of type EventHandler

An event handler IDL attribute for the uncapturederror event type.

Listening for uncaptured errors from a GPUDevice:
gpuDevice.addEventListener('uncapturederror', (event) => {
    // Re-surface the error, because adding an event listener may silence console logs.
    console.error('A WebGPU error was not captured:', event.error);

    myEngineDebugReport.uncapturedErrors.push({
        type: event.error.constructor.name,
        message: event.error.message,
    });
});

23. Detailed Operations

This section describes the details of various GPU operations.

23.1. Computing

Computing operations provide direct access to the GPU’s programmable hardware. Compute shaders do not have shader stage inputs or outputs; their results are side effects from writing data into storage bindings bound either as GPUBufferBindingLayout with GPUBufferBindingType "storage" or as GPUStorageTextureBindingLayout. These operations are encoded within GPUComputePassEncoder as dispatchWorkgroups() and dispatchWorkgroupsIndirect().

The main compute algorithm:

compute(descriptor, dispatchCall, state)

Arguments:

  1. Let computeInvocations be an empty list.

  2. Let computeStage be descriptor.compute.

  3. Let workgroupSize be the computed workgroup size for computeStage.entryPoint after applying computeStage.constants to computeStage.module.

  4. For workgroupX in range [0, dispatchCall.workgroupCountX]:

    1. For workgroupY in range [0, dispatchCall.workgroupCountY]:

      1. For workgroupZ in range [0, dispatchCall.workgroupCountZ]:

        1. For localX in range [0, workgroupSize.x]:

          1. For localY in range [0, workgroupSize.y]:

            1. For localZ in range [0, workgroupSize.z]:

              1. Let invocation be { computeStage, workgroupX, workgroupY, workgroupZ, localX, localY, localZ }

              2. Append invocation to computeInvocations.

  5. For every invocation in computeInvocations, in any order the device chooses, including in parallel:

    1. Set the shader builtins:

      • Set the num_workgroups builtin, if any, to (
        dispatchCall.workgroupCountX,
        dispatchCall.workgroupCountY,
        dispatchCall.workgroupCountZ
        )

      • Set the workgroup_id builtin, if any, to (
        invocation.workgroupX,
        invocation.workgroupY,
        invocation.workgroupZ
        )

      • Set the local_invocation_id builtin, if any, to (
        invocation.localX,
        invocation.localY,
        invocation.localZ
        )

      • Set the global_invocation_id builtin, if any, to (
        invocation.workgroupX * workgroupSize.x + invocation.localX,
        invocation.workgroupY * workgroupSize.y + invocation.localY,
        invocation.workgroupZ * workgroupSize.z + invocation.localZ
        )
        .

      • Set the local_invocation_index builtin, if any, to invocation.localX + (invocation.localY * workgroupSize.x) + (invocation.localZ * workgroupSize.x * workgroupSize.y)

    2. Invoke the compute shader entry point described by invocation.computeStage.

Note: Shader invocations have no guaranteed order, and will generally run in parallel according to device capabilities. Developers should not assume that any given invocation or workgroup will complete before any other one is started. Some devices may appear to execute in a consistent order, but this behavior should not be relied on as it will not perform identically across all devices. Shaders that require synchronization across invocations must use Synchronization Built-in Functions to coordinate execution.

The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.
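The builtin assignments in the algorithm above can be sketched as follows. This is a non-normative model: plain objects with x/y/z fields stand in for the workgroup size, workgroup counts, and per-invocation ids.

```javascript
// Non-normative sketch of the compute shader builtin values for a single
// invocation, following the formulas in the compute algorithm above.
function computeBuiltins(workgroup, local, workgroupSize, workgroupCount) {
  return {
    num_workgroups: [workgroupCount.x, workgroupCount.y, workgroupCount.z],
    workgroup_id: [workgroup.x, workgroup.y, workgroup.z],
    local_invocation_id: [local.x, local.y, local.z],
    global_invocation_id: [
      workgroup.x * workgroupSize.x + local.x,
      workgroup.y * workgroupSize.y + local.y,
      workgroup.z * workgroupSize.z + local.z,
    ],
    // Linearized index of the invocation within its own workgroup.
    local_invocation_index:
      local.x +
      local.y * workgroupSize.x +
      local.z * workgroupSize.x * workgroupSize.y,
  };
}
```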

23.2. Rendering

Rendering is done by a set of GPU operations that are executed within GPURenderPassEncoder, and result in modifications of the texture data, viewed by the render pass attachments. These operations are encoded with draw(), drawIndexed(), drawIndirect(), and drawIndexedIndirect().

Note: rendering is the traditional use of GPUs, and is supported by multiple fixed-function blocks in hardware.

The main rendering algorithm:

render(pipeline, drawCall, state)

Arguments:

  1. Let descriptor be pipeline.[[descriptor]].

  2. Resolve indices. See § 23.2.1 Index Resolution.

    Let vertexList be the result of resolve indices(drawCall, state).

  3. Process vertices. See § 23.2.2 Vertex Processing.

    Execute process vertices(vertexList, drawCall, descriptor.vertex, state).

  4. Assemble primitives. See § 23.2.3 Primitive Assembly.

    Execute assemble primitives(vertexList, drawCall, descriptor.primitive).

  5. Clip primitives. See § 23.2.4 Primitive Clipping.

    Let primitiveList be the result of this stage.

  6. Rasterize. See § 23.2.5 Rasterization.

    Let rasterizationList be the result of rasterize(primitiveList, state).

  7. Process fragments. See § 23.2.6 Fragment Processing.

    Gather a list of fragments, resulting from executing process fragment(rasterPoint, descriptor, state) for each rasterPoint in rasterizationList.

  8. Write pixels. See § 23.2.7 Output Merging.

    For each non-null fragment of fragments:

23.2.1. Index Resolution

At the first stage of rendering, the pipeline builds a list of vertices to process for each instance.

resolve indices(drawCall, state)

Arguments:

Returns: list of integer indices.

  1. Let vertexIndexList be an empty list of indices.

  2. If drawCall is an indexed draw call:

    1. Initialize the vertexIndexList with drawCall.indexCount integers.

    2. For i in range 0 .. drawCall.indexCount (non-inclusive):

      1. Let relativeVertexIndex be fetch index(i + drawCall.firstIndex, state.[[index_buffer]]).

      2. If relativeVertexIndex has the special value "out of bounds", return the empty list.

        Note: Implementations may choose to display a warning when this occurs, especially when it is easy to detect (like in non-indirect indexed draw calls).

      3. Append drawCall.baseVertex + relativeVertexIndex to the vertexIndexList.

  3. Otherwise:

    1. Initialize the vertexIndexList with drawCall.vertexCount integers.

    2. Set each vertexIndexList item i to the value drawCall.firstVertex + i.

  4. Return vertexIndexList.

Note: in the case of indirect draw calls, the indexCount, vertexCount, and other properties of drawCall are read from the indirect buffer instead of the draw command itself.

fetch index(i, buffer)

Arguments:

Returns: unsigned integer or "out of bounds"

  1. Let indexSize be defined by the state.[[index_format]]:

    "uint16"

    2

    "uint32"

    4

  2. If state.[[index_buffer_offset]] + (i + 1) × indexSize > state.[[index_buffer_size]], return the special value "out of bounds".

  3. Interpret the data in state.[[index_buffer]], starting at offset state.[[index_buffer_offset]] + i × indexSize, of size indexSize bytes, as an unsigned integer and return it.
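The index resolution and out-of-bounds behavior above can be sketched as follows. This is a non-normative model: the bound index buffer region is modeled as a plain array of already-decoded unsigned integers, so the byte-level bounds check of fetch index reduces to a length check.

```javascript
// Non-normative sketch of "resolve indices" for an indexed draw call.
// Returns the list of vertex indices, or an empty list if any index
// fetch would be out of bounds (cancelling the draw, as the spec allows).
function resolveIndices(drawCall, indexBuffer) {
  const vertexIndexList = [];
  for (let i = 0; i < drawCall.indexCount; i++) {
    const fetchIndex = i + drawCall.firstIndex;
    if (fetchIndex >= indexBuffer.length) return []; // "out of bounds"
    vertexIndexList.push(drawCall.baseVertex + indexBuffer[fetchIndex]);
  }
  return vertexIndexList;
}
```

For non-indexed draw calls the list is simply firstVertex + i for i in [0, vertexCount).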

23.2.2. Vertex Processing

The vertex processing stage is a programmable stage of the render pipeline that processes the vertex attribute data and produces clip space positions for § 23.2.4 Primitive Clipping, as well as other data for § 23.2.6 Fragment Processing.

process vertices(vertexIndexList, drawCall, desc, state)

Arguments:

Each vertex vertexIndex in the vertexIndexList, in each instance of index rawInstanceIndex, is processed independently. The rawInstanceIndex is in range from 0 to drawCall.instanceCount - 1, inclusive. This processing happens in parallel, and any side effects, such as writes into GPUBufferBindingType "storage" bindings, may happen in any order.

  1. Let instanceIndex be rawInstanceIndex + drawCall.firstInstance.

  2. For each non-null vertexBufferLayout in the list of desc.buffers:

    1. Let i be the index of the buffer layout in this list.

    2. Let vertexBuffer, vertexBufferOffset, and vertexBufferBindingSize be the buffer, offset, and size at slot i of state.[[vertex_buffers]].

    3. Let vertexElementIndex be dependent on vertexBufferLayout.stepMode:

      "vertex"

      vertexIndex

      "instance"

      instanceIndex

    4. Let drawCallOutOfBounds be false.

    5. For each attributeDesc in vertexBufferLayout.attributes:

      1. Let attributeOffset be vertexBufferOffset + vertexElementIndex * vertexBufferLayout.arrayStride + attributeDesc.offset.

      2. If attributeOffset + byteSize(attributeDesc.format) > vertexBufferOffset + vertexBufferBindingSize:

        1. Set drawCallOutOfBounds to true.

        2. Optionally (implementation-defined), empty vertexIndexList and return, cancelling the draw call.

          Note: This allows implementations to detect out-of-bounds values in the index buffer before issuing a draw call, instead of using invalid memory reference behavior.

    6. For each attributeDesc in vertexBufferLayout.attributes:

      1. If drawCallOutOfBounds is true:

        1. Load the attribute data according to WGSL’s invalid memory reference behavior, from vertexBuffer.

          Note: Invalid memory reference allows several behaviors, including actually loading the "correct" result for an attribute that is in-bounds, even when the draw-call-wide drawCallOutOfBounds is true.

        Otherwise:

        1. Let attributeOffset be vertexBufferOffset + vertexElementIndex * vertexBufferLayout.arrayStride + attributeDesc.offset.

        2. Load the attribute data of format attributeDesc.format from vertexBuffer starting at offset attributeOffset. The components are loaded in the order x, y, z, w from buffer memory.

      2. Convert the data into a shader-visible format, according to channel formats rules.

        An attribute of type "snorm8x2" and byte values of [0x70, 0xD0] will be converted to vec2<f32>(0.88, -0.38) in WGSL.
      3. Adjust the data size to the shader type:

        • if both are scalar, or both are vectors of the same dimensionality, no adjustment is needed.

        • if data is vector but the shader type is scalar, then only the first component is extracted.

        • if both are vectors, and data has a higher dimension, the extra components are dropped.

          An attribute of type "float32x3" and value vec3<f32>(1.0, 2.0, 3.0) will be exposed to the shader as vec2<f32>(1.0, 2.0) if a 2-component vector is expected.
        • if the shader type is a vector of higher dimensionality, or the data is a scalar, then the missing components are filled from vec4<*>(0, 0, 0, 1) value.

          An attribute of type "sint32" and value 5 will be exposed to the shader as vec4<i32>(5, 0, 0, 1) if a 4-component vector is expected.
      4. Bind the data to vertex shader input location attributeDesc.shaderLocation.

  3. For each GPUBindGroup group at index in state.[[bind_groups]]:

    1. For each resource GPUBindingResource in the bind group:

      1. Let entry be the corresponding GPUBindGroupLayoutEntry for this resource.

      2. If entry.visibility includes VERTEX:

  4. Set the shader builtins:

    • Set the vertex_index builtin, if any, to vertexIndex.

    • Set the instance_index builtin, if any, to instanceIndex.

  5. Invoke the vertex shader entry point described by desc.

    Note: The target platform caches the results of vertex shader invocations. There is no guarantee that any vertexIndex that repeats more than once will result in multiple invocations. Similarly, there is no guarantee that a single vertexIndex will only be processed once.

    The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.
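The adjustment of attribute data to the shader type described above can be sketched as follows. This is a non-normative model: attribute data and shader inputs are modeled as plain arrays of components, with scalars as 1-element arrays.

```javascript
// Non-normative sketch of adjusting loaded attribute data to the shader
// input's component count: extra components are dropped, and missing
// components are filled from the vec4(0, 0, 0, 1) pattern.
function adjustAttribute(data, shaderComponents) {
  const fill = [0, 0, 0, 1];
  const out = [];
  for (let c = 0; c < shaderComponents; c++) {
    out.push(c < data.length ? data[c] : fill[c]);
  }
  return out;
}
```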

23.2.3. Primitive Assembly

Primitives are assembled by a fixed-function stage of GPUs.

assemble primitives(vertexIndexList, drawCall, desc)

Arguments:

For each instance, the primitives get assembled from the vertices that have been processed by the shaders, based on the vertexIndexList.

  1. First, if the primitive topology is a strip, (which means that desc.stripIndexFormat is not undefined) and the drawCall is indexed, the vertexIndexList is split into sub-lists using the maximum value of desc.stripIndexFormat as a separator.

    Example: a vertexIndexList with values [1, 2, 65535, 4, 5, 6] of type "uint16" will be split in sub-lists [1, 2] and [4, 5, 6].

  2. For each of the sub-lists vl, primitive generation is done according to the desc.topology:

    "line-list"

    Line primitives are composed from (vl.0, vl.1), then (vl.2, vl.3), then (vl.4, vl.5), etc. Each subsequent primitive takes 2 vertices.

    "line-strip"

    Line primitives are composed from (vl.0, vl.1), then (vl.1, vl.2), then (vl.2, vl.3), etc. Each subsequent primitive takes 1 vertex.

    "triangle-list"

    Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.3, vl.4, vl.5), then (vl.6, vl.7, vl.8), etc. Each subsequent primitive takes 3 vertices.

    "triangle-strip"

    Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.2, vl.1, vl.3), then (vl.2, vl.3, vl.4), then (vl.4, vl.3, vl.5), etc. Each subsequent primitive takes 1 vertex.

    Any incomplete primitives are dropped.
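The strip splitting and triangle-strip assembly above can be sketched as follows, as a non-normative model over plain index arrays.

```javascript
// Non-normative sketch of primitive assembly. First, split an index list
// into sub-lists on the strip restart sentinel (the maximum value of the
// strip index format, e.g. 65535 for "uint16").
function splitStrips(indexList, sentinel) {
  const lists = [[]];
  for (const i of indexList) {
    if (i === sentinel) lists.push([]);
    else lists[lists.length - 1].push(i);
  }
  return lists;
}

// Then assemble "triangle-strip" primitives from one sub-list, matching
// the vertex order given above: each primitive after the first reuses two
// vertices, with alternating winding. Incomplete primitives are dropped.
function assembleTriangleStrip(vl) {
  const primitives = [];
  for (let i = 0; i + 2 < vl.length; i++) {
    primitives.push(
      i % 2 === 0
        ? [vl[i], vl[i + 1], vl[i + 2]]
        : [vl[i + 1], vl[i], vl[i + 2]]
    );
  }
  return primitives;
}
```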

23.2.4. Primitive Clipping

Vertex shaders have to produce a built-in position (of type vec4<f32>), which denotes the clip position of a vertex in clip space coordinates.

Primitives are clipped to the clip volume, which, for any clip position p inside a primitive, is defined by the following inequalities:

−p.w ≤ p.x ≤ p.w
−p.w ≤ p.y ≤ p.w
0 ≤ p.z ≤ p.w

When the "clip-distances" feature is enabled, this clip volume can be further restricted by user-defined half-spaces by declaring clip_distances in the output of vertex stage. Each value in the clip_distances array will be linearly interpolated across the primitive, and the portion of the primitive with interpolated distances less than 0 will be clipped.

If descriptor.primitive.unclippedDepth is true, depth clipping is not applied: the clip volume is not bounded in the z dimension.

A primitive passes through this stage unchanged if every one of its edges lies entirely inside the clip volume. If the edges of a primitive intersect the boundary of the clip volume, the intersecting edges are reconnected by new edges that lie along the boundary of the clip volume. For triangular primitives (descriptor.primitive.topology is "triangle-list" or "triangle-strip"), this reconnection may result in the introduction of new vertices into the polygon, internally.

If a primitive intersects an edge of the clip volume’s boundary, the clipped polygon must include a point on this boundary edge.

If the vertex shader outputs other floating-point values (scalars and vectors), qualified with "perspective" interpolation, they also get clipped. The output values associated with a vertex that lies within the clip volume are unaffected by clipping. If a primitive is clipped, however, the output values assigned to vertices produced by clipping are clipped.

Considering an edge between vertices a and b that got clipped, resulting in the vertex c, let’s define t to be the ratio between the edge vertices: c.p = t × a.p + (1 − t) × b.p, where x.p is the output clip position of a vertex x.

For each vertex output value "v" with a corresponding fragment input, a.v and b.v would be the outputs for a and b vertices respectively. The clipped shader output c.v is produced based on the interpolation qualifier:

flat

Flat interpolation is unaffected, and is based on the provoking vertex, which is determined by the interpolation sampling mode declared in the shader. The output value is the same for the whole primitive, and matches the vertex output of the provoking vertex.

linear

The interpolation ratio gets adjusted against the perspective coordinates of the clip positions, so that the result of interpolation is linear in screen space.

perspective

The value is linearly interpolated in clip space, producing perspective-correct values.

The result of primitive clipping is a new set of primitives, which are contained within the clip volume.

23.2.5. Rasterization

Rasterization is the hardware processing stage that maps the generated primitives to the 2-dimensional rendering area of the framebuffer - the set of render attachments in the current GPURenderPassEncoder. This rendering area is split into an even grid of pixels.

The framebuffer coordinates start from the top-left corner of the render targets. Each unit corresponds exactly to one pixel. See § 3.3 Coordinate Systems for more information.

Rasterization determines the set of pixels affected by a primitive. In case of multi-sampling, each pixel is further split into descriptor.multisample.count samples. The standard sample patterns are as follows, with positions in framebuffer coordinates relative to the top-left corner of the pixel, such that the pixel ranges from (0, 0) to (1, 1):

multisample.count Sample positions
1 Sample 0: (0.5, 0.5)
4 Sample 0: (0.375, 0.125)
Sample 1: (0.875, 0.375)
Sample 2: (0.125, 0.625)
Sample 3: (0.625, 0.875)

Implementations must use the standard sample pattern for the given multisample.count when performing rasterization.

Let’s define a FragmentDestination to contain:

position

the 2D pixel position using framebuffer coordinates

sampleIndex

an integer in case § 23.2.10 Per-Sample Shading is active, or null otherwise

We’ll also use a notion of normalized device coordinates, or NDC. In this coordinate system, the viewport bounds range in X and Y from -1 to 1, and in Z from 0 to 1.

Rasterization produces a list of RasterizationPoints, each containing the following data:

destination

refers to FragmentDestination

coverageMask

refers to multisample coverage mask (see § 23.2.11 Sample Masking)

frontFacing

is true if it’s a point on the front face of a primitive

perspectiveDivisor

refers to interpolated 1.0 ÷ W across the primitive

depth

refers to the depth in viewport coordinates, i.e. between the [[viewport]] minDepth and maxDepth.

primitiveVertices

refers to the list of vertex outputs forming the primitive

barycentricCoordinates

refers to § 23.2.5.3 Barycentric coordinates

rasterize(primitiveList, state)

Arguments:

Returns: list of RasterizationPoint.

Each primitive in primitiveList is processed independently. However, the order of primitives affects later stages, such as depth/stencil operations and pixel writes.

  1. First, the clipped vertices are transformed into NDC - normalized device coordinates. Given the output position p, the NDC position and perspective divisor are:

    ndc(p) = vector(p.x ÷ p.w, p.y ÷ p.w, p.z ÷ p.w)

    divisor(p) = 1.0 ÷ p.w

  2. Let vp be state.[[viewport]]. Map the NDC position n into viewport coordinates:

    • Compute framebuffer coordinates from the render target offset and size:

      framebufferCoords(n) = vector(vp.x + 0.5 × (n.x + 1) × vp.width, vp.y + 0.5 × (−n.y + 1) × vp.height)

    • Compute depth by linearly mapping [0,1] to the viewport depth range:

      depth(n) = vp.minDepth + n.z × ( vp.maxDepth - vp.minDepth )

  3. Let rasterizationPoints be the list of points, each having its attributes (divisor(p), framebufferCoords(n), depth(n), etc.) interpolated according to its position on the primitive, using the same interpolation as § 23.2.4 Primitive Clipping. If the attribute is user-defined (not a built-in output value) then the interpolation type specified by the @interpolate WGSL attribute is used.

  4. Proceed with a specific rasterization algorithm, depending on primitive.topology:

    "point-list"

    The point, if not filtered by § 23.2.4 Primitive Clipping, goes into § 23.2.5.1 Point Rasterization.

    "line-list" or "line-strip"

    The line cut by § 23.2.4 Primitive Clipping goes into § 23.2.5.2 Line Rasterization.

    "triangle-list" or "triangle-strip"

    The polygon produced in § 23.2.4 Primitive Clipping goes into § 23.2.5.4 Polygon Rasterization.

  5. Remove all the points rp from rasterizationPoints that have rp.destination.position outside of state.[[scissorRect]].

  6. Return rasterizationPoints.
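The viewport mapping in step 2 can be sketched as follows. This is a non-normative model: vp is a plain object with the viewport fields x, y, width, height, minDepth, and maxDepth, and n is an NDC position.

```javascript
// Non-normative sketch of mapping an NDC position into framebuffer
// coordinates and viewport depth, following the formulas above. Note the
// Y flip: NDC +Y is up, while framebuffer Y grows downward.
function ndcToFramebuffer(n, vp) {
  return {
    x: vp.x + 0.5 * (n.x + 1) * vp.width,
    y: vp.y + 0.5 * (-n.y + 1) * vp.height,
    depth: vp.minDepth + n.z * (vp.maxDepth - vp.minDepth),
  };
}
```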

23.2.5.1. Point Rasterization

A single FragmentDestination is selected within the pixel containing the framebuffer coordinates of the point.

The coverage mask depends on multi-sampling mode:

sample-frequency

coverageMask = 1 ≪ sampleIndex

pixel-frequency multi-sampling

coverageMask = (1 ≪ descriptor.multisample.count) − 1

no multi-sampling

coverageMask = 1
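The three cases above can be sketched as follows; this is a non-normative model where the mode names are strings standing in for the conditions described above.

```javascript
// Non-normative sketch of the point coverage mask. In sample-frequency
// shading only the current sample is covered; in pixel-frequency
// multi-sampling the point covers all samples of the pixel.
function pointCoverageMask(mode, { sampleIndex, sampleCount } = {}) {
  switch (mode) {
    case 'sample-frequency':
      return 1 << sampleIndex;
    case 'pixel-frequency':
      return (1 << sampleCount) - 1; // all sample bits set
    default:
      return 1; // no multi-sampling
  }
}
```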

23.2.5.2. Line Rasterization

The exact algorithm used for line rasterization is not defined, and may differ between implementations. For example, the line may be drawn using § 23.2.5.4 Polygon Rasterization of a 1px-width rectangle around the line segment, or using Bresenham’s line algorithm to select the FragmentDestinations.

Note: See Basic Line Segment Rasterization and Bresenham Line Segment Rasterization in the Vulkan 1.3 spec for more details on how these line rasterization algorithms may be implemented.

23.2.5.3. Barycentric coordinates

Barycentric coordinates are a list of n numbers bi, defined for a point p inside a convex polygon with n vertices vi in framebuffer space. Each bi is in the range 0 to 1, inclusive, and represents the proximity to vertex vi. Their sum is always constant:

∑ (bi) = 1

These coordinates uniquely specify any point p within the polygon (or on its boundary) as:

p = ∑ (bi × pi)

For a polygon with 3 vertices - a triangle, barycentric coordinates of any point p can be computed as follows:

Apolygon = A(v1, v2, v3)

b1 = A(p, v2, v3) ÷ Apolygon

b2 = A(v1, p, v3) ÷ Apolygon

b3 = A(v1, v2, p) ÷ Apolygon

Where A(list of points) is the area of the polygon with the given set of vertices.

For polygons with more than 3 vertices, the exact algorithm is implementation-dependent. One of the possible implementations is to triangulate the polygon and compute the barycentrics of a point based on the triangle it falls into.
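The triangle case can be sketched as follows. This is a non-normative model using the signed-area ratios described above; a/b/c and p are plain objects with x and y fields.

```javascript
// Non-normative sketch of barycentric coordinates for a triangle, as
// ratios of signed areas. signedArea uses the cross-product form of the
// triangle area A(a, b, c).
function signedArea(a, b, c) {
  return 0.5 * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

function barycentric(p, v1, v2, v3) {
  const area = signedArea(v1, v2, v3);
  return [
    signedArea(p, v2, v3) / area,
    signedArea(v1, p, v3) / area,
    signedArea(v1, v2, p) / area,
  ];
}
```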

23.2.5.4. Polygon Rasterization

A polygon is front-facing if it’s oriented towards the projection. Otherwise, the polygon is back-facing.

rasterize polygon()

Arguments:

Returns: list of RasterizationPoint.

  1. Let rasterizationPoints be an empty list.

  2. Let v(i) be the framebuffer coordinates for the clipped vertex number i (starting with 1) in a rasterized polygon of n vertices.

    Note: this section uses the term "polygon" instead of a "triangle", since § 23.2.4 Primitive Clipping stage may have introduced additional vertices. This is non-observable by the application.

  3. Determine if the polygon is front-facing, which depends on the sign of the area occupied by the polygon in framebuffer coordinates:

    area = 0.5 × ((v1.x × vn.y − vn.x × v1.y) + ∑ (vi+1.x × vi.y − vi.x × vi+1.y))

    The sign of area is interpreted based on the primitive.frontFace:

    "ccw"

    area > 0 is considered front-facing, otherwise back-facing

    "cw"

    area < 0 is considered front-facing, otherwise back-facing

  4. Cull based on primitive.cullMode:

    "none"

    All polygons pass this test.

    "front"

    The front-facing polygons are discarded, and are not processed in later stages of the render pipeline.

    "back"

    The back-facing polygons are discarded.

  5. Determine a set of fragments inside the polygon in framebuffer space - these are locations scheduled for the per-fragment operations. This operation is known as "point sampling". The logic is based on descriptor.multisample:

    disabled

    Fragments are associated with pixel centers. That is, all the points with coordinates C in framebuffer space, where fract(C) = vector2(0.5, 0.5), that fall inside the polygon are included. If a pixel center is on the edge of the polygon, whether or not it’s included is not defined.

    Note: this becomes a subject of precision for the rasterizer.

    enabled

    Each pixel is associated with descriptor.multisample.count locations, which are implementation-defined. The locations are ordered, and the list is the same for each pixel of the framebuffer. Each location corresponds to one fragment in the multisampled framebuffer.

    The rasterizer builds a mask of the locations hit inside each pixel and provides it as the "sample_mask" built-in to the fragment shader.

  6. For each produced fragment of type FragmentDestination:

    1. Let rp be a new RasterizationPoint object

    2. Compute the list b as § 23.2.5.3 Barycentric coordinates of that fragment. Set rp.barycentricCoordinates to b.

    3. Let di be the depth value of vi.

    4. Set rp.depth to ∑ (bi × di)

    5. Append rp to rasterizationPoints.

  7. Return rasterizationPoints.
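The facing determination (step 3) and culling (step 4) above can be sketched as follows. This is a non-normative model: vertices are framebuffer-space points, and the signed area follows the spec's formula, whose sign convention accounts for framebuffer Y growing downward.

```javascript
// Non-normative sketch of polygon facing and culling. The signed area
// matches the formula above: 0.5 × Σ (v[i+1].x × v[i].y − v[i].x × v[i+1].y),
// with the edge from the last vertex back to the first included.
function polygonSignedArea(v) {
  let sum = 0;
  for (let i = 0; i < v.length; i++) {
    const cur = v[i], next = v[(i + 1) % v.length];
    sum += next.x * cur.y - cur.x * next.y;
  }
  return 0.5 * sum;
}

function isCulled(v, frontFace, cullMode) {
  const area = polygonSignedArea(v);
  const frontFacing = frontFace === 'ccw' ? area > 0 : area < 0;
  if (cullMode === 'front') return frontFacing;
  if (cullMode === 'back') return !frontFacing;
  return false; // "none": all polygons pass
}
```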

23.2.6. Fragment Processing

The fragment processing stage is a programmable stage of the render pipeline that computes the fragment data (often a color) to be written into render targets.

This stage produces a Fragment for each RasterizationPoint:

process fragment(rp, descriptor, state)

Arguments:

Returns: Fragment or null.

  1. Let fragmentDesc be descriptor.fragment.

  2. Let depthStencilDesc be descriptor.depthStencil.

  3. Let fragment be a new Fragment object.

  4. Set fragment.destination to rp.destination.

  5. Set fragment.frontFacing to rp.frontFacing.

  6. Set fragment.coverageMask to rp.coverageMask.

  7. Set fragment.depth to rp.depth.

  8. If frag_depth builtin is not produced by the shader:

    1. Set fragment.depthPassed to the result of compare fragment(fragment.destination, fragment.depth, "depth", state.[[depthStencilAttachment]], depthStencilDesc?.depthCompare).

  9. Set stencilState to depthStencilDesc?.stencilFront if rp.frontFacing is true and depthStencilDesc?.stencilBack otherwise.

  10. Set fragment.stencilPassed to the result of compare fragment(fragment.destination, state.[[stencilReference]], "stencil", state.[[depthStencilAttachment]], stencilState?.compare).

  11. If fragmentDesc is not null:

    1. If fragment.depthPassed is false, the frag_depth builtin is not produced by the shader entry point, and the shader entry point does not write to any storage bindings, the following steps may be skipped.

    2. Set the shader input builtins. For each non-composite argument of the entry point, annotated as a builtin, set its value based on the annotation:

      position

      vec4<f32>(rp.destination.position, rp.depth, rp.perspectiveDivisor)

      front_facing

      rp.frontFacing

      sample_index

      rp.destination.sampleIndex

      sample_mask

      rp.coverageMask

    3. For each user-specified shader stage input of the fragment stage:

      1. Let value be the interpolated fragment input, based on rp.barycentricCoordinates, rp.primitiveVertices, and the interpolation qualifier on the input.

      2. Set the corresponding fragment shader location input to value.

    4. Invoke the fragment shader entry point described by fragmentDesc.

      The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.

    5. If the fragment issued discard, return null.

    6. Set fragment.colors to the user-specified shader stage output values from the shader.

    7. Take the shader output builtins:

      1. If frag_depth builtin is produced by the shader as value:

        1. Let vp be state.[[viewport]].

        2. Set fragment.depth to clamp(value, vp.minDepth, vp.maxDepth).

        3. Set fragment.depthPassed to the result of compare fragment(fragment.destination, fragment.depth, "depth", state.[[depthStencilAttachment]], depthStencilDesc?.depthCompare).

    8. If sample_mask builtin is produced by the shader as value:

      1. Set fragment.coverageMask to fragment.coverageMask ∧ value (bitwise AND).

    Otherwise (if fragmentDesc is null), we are in § 23.2.8 No Color Output mode, and fragment.colors is empty.

  12. Return fragment.

compare fragment(destination, value, aspect, attachment, compareFunc)

Arguments:

Returns: true if the comparison passes, or false otherwise

Processing of fragments happens in parallel, while any side effects, such as writes into GPUBufferBindingType "storage" bindings, may happen in any order.
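Note: the comparison performed by compare fragment can be modeled non-normatively with the GPUCompareFunction values; treating a missing compare function as always passing is an assumption of this sketch, not spec text.

```javascript
// Evaluate a GPUCompareFunction against a candidate value and the stored
// attachment value, as used by the depth and stencil tests.
function compareFuncPasses(compareFunc, value, storedValue) {
  switch (compareFunc) {
    case "never":         return false;
    case "less":          return value < storedValue;
    case "equal":         return value === storedValue;
    case "less-equal":    return value <= storedValue;
    case "greater":       return value > storedValue;
    case "not-equal":     return value !== storedValue;
    case "greater-equal": return value >= storedValue;
    case "always":        return true;
    default:              return true; // assumption: no test configured
  }
}
```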

23.2.7. Output Merging

Output merging is a fixed-function stage of the render pipeline that outputs the fragment color, depth and stencil data to be written into the render pass attachments.

process depth stencil(fragment, pipeline, state)

Arguments:

  1. Let depthStencilDesc be pipeline.[[descriptor]].depthStencil.

  2. If pipeline.[[writesDepth]] is true and fragment.depthPassed is true:

    1. Set the value of the depth aspect of state.[[depthStencilAttachment]] at fragment.destination to fragment.depth.

  3. If pipeline.[[writesStencil]] is true:

    1. Set stencilState to depthStencilDesc.stencilFront if fragment.frontFacing is true and depthStencilDesc.stencilBack otherwise.

    2. If fragment.stencilPassed is false:

      • Let stencilOp be stencilState.failOp.

      Else if fragment.depthPassed is false:

      • Let stencilOp be stencilState.depthFailOp.

      Else:

      • Let stencilOp be stencilState.passOp.

    3. Update the value of the stencil aspect of state.[[depthStencilAttachment]] at fragment.destination by performing the operation described by stencilOp.

The depth input to this stage, if any, is clamped to the current [[viewport]] depth range (regardless of whether the fragment shader stage writes the frag_depth builtin).
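Note: the stencil operation selection in step 3.2 above can be sketched non-normatively as a function of the two test results, using the GPUStencilFaceState members failOp, depthFailOp, and passOp.

```javascript
// Select which stencil operation to apply, given the outcome of the
// stencil test and the depth test for this fragment.
function selectStencilOp(stencilState, stencilPassed, depthPassed) {
  if (!stencilPassed) return stencilState.failOp;
  if (!depthPassed)   return stencilState.depthFailOp;
  return stencilState.passOp;
}
```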

process color attachments(fragment, pipeline, state)

Arguments:

  1. If fragment.depthPassed is false or fragment.stencilPassed is false, return.

  2. Let targets be pipeline.[[descriptor]].fragment.targets.

  3. For each attachment of state.[[colorAttachments]]:

    1. Let color be the value from fragment.colors that corresponds with attachment.

    2. Let targetDesc be the targets entry that corresponds with attachment.

    3. If targetDesc.blend is provided:

      1. Let colorBlend be targetDesc.blend.color.

      2. Let alphaBlend be targetDesc.blend.alpha.

      3. Set the RGB components of color to the value computed by performing the operation described by colorBlend.operation with the values described by colorBlend.srcFactor and colorBlend.dstFactor.

      4. Set the alpha component of color to the value computed by performing the operation described by alphaBlend.operation with the values described by alphaBlend.srcFactor and alphaBlend.dstFactor.

    4. Set the value of attachment at fragment.destination to color.
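Note: steps 3.3-3.4 above can be sketched non-normatively for the "add" blend operation and a few common GPUBlendFactor values; other operations and factors are omitted, and the helper names are illustrative.

```javascript
// Resolve a GPUBlendFactor to a scalar multiplier for one color channel.
function factorValue(factor, src, dst) {
  switch (factor) {
    case "zero":                return 0;
    case "one":                 return 1;
    case "src-alpha":           return src.a;
    case "one-minus-src-alpha": return 1 - src.a;
    case "dst-alpha":           return dst.a;
    default: throw new Error(`GPUBlendFactor "${factor}" not modeled in this sketch`);
  }
}

// Blend one channel using the "add" operation:
// result = src × srcFactor + dst × dstFactor.
function blendChannel(component, src, dst, channel) {
  return src[channel] * factorValue(component.srcFactor, src, dst)
       + dst[channel] * factorValue(component.dstFactor, src, dst);
}
```

For example, classic premultiplied-free alpha blending uses srcFactor "src-alpha" with dstFactor "one-minus-src-alpha".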

23.2.8. No Color Output

In no-color-output mode, pipeline does not produce any color attachment outputs.

The pipeline still performs rasterization and produces depth values based on the vertex position output. The depth testing and stencil operations can still be used.

23.2.9. Alpha to Coverage

In alpha-to-coverage mode, an additional alpha-to-coverage mask of MSAA samples is generated based on the alpha component of the fragment shader output value at @location(0).

The algorithm of producing the extra mask is platform-dependent and can vary for different pixels. It guarantees that:

23.2.10. Per-Sample Shading

When rendering into multisampled render attachments, fragment shaders can be run once per-pixel or once per-sample. Fragment shaders must run once per-sample if either the sample_index builtin or sample interpolation sampling is used and contributes to the shader output. Otherwise fragment shaders may run once per-pixel with the result broadcast out to each of the samples included in the final sample mask.

When using per-sample shading, the color output for sample N is produced by the fragment shader execution with sample_index == N for the current pixel.

23.2.11. Sample Masking

The final sample mask for a pixel is computed as: rasterization mask & mask (of GPUMultisampleState) & shader-output mask.

Only the lower count bits of the mask are considered, where count is the multisample count of the render targets.

If the least-significant bit at position N of the final sample mask has value of "0", the sample color outputs (corresponding to sample N) to all attachments of the fragment shader are discarded. Also, no depth test or stencil operations are executed on the relevant samples of the depth-stencil attachment.

The rasterization mask is produced by the rasterization stage, based on the shape of the rasterized polygon. Samples covered by the shape have their corresponding mask bits set to 1.

The shader-output mask takes the output value of "sample_mask" builtin in the fragment shader. If the builtin is not output from the fragment shader, and alphaToCoverageEnabled is enabled, the shader-output mask becomes the alpha-to-coverage mask. Otherwise, it defaults to 0xFFFFFFFF.
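Note: the final sample mask computation described above can be sketched non-normatively as follows, where count is the multisample count (e.g. 4) and the function name is illustrative.

```javascript
// Combine the three masks and keep only the lower `count` bits,
// as described in § 23.2.11 Sample Masking.
function finalSampleMask(rasterizationMask, pipelineMask, shaderOutputMask, count) {
  const mask = rasterizationMask & pipelineMask & shaderOutputMask;
  return mask & ((1 << count) - 1); // only the lower `count` bits are considered
}
```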

24. Type Definitions

typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

typedef unsigned long long GPUSize64Out;
typedef unsigned long GPUIntegerCoordinateOut;
typedef unsigned long GPUSize32Out;

typedef unsigned long GPUFlagsConstant;

24.1. Colors & Vectors

dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;

Note: double is large enough to precisely hold 32-bit signed/unsigned integers and single-precision floats.

r, of type double

The red channel value.

g, of type double

The green channel value.

b, of type double

The blue channel value.

a, of type double

The alpha channel value.

For a given GPUColor value color, depending on its type, the syntax:
validate GPUColor shape(color)

Arguments:

Returns: undefined

Content timeline steps:

  1. Throw a TypeError if color is a sequence and color.size ≠ 4.
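Note: the "validate GPUColor shape" step can be sketched non-normatively as follows; a GPUColor given as a sequence must contain exactly 4 values (r, g, b, a), while the dictionary form is checked by Web IDL conversion and is not validated here.

```javascript
// Throw a TypeError if `color` is a sequence whose size is not 4.
function validateGPUColorShape(color) {
  if (Array.isArray(color) && color.length !== 4) {
    throw new TypeError("A GPUColor sequence must contain exactly 4 elements");
  }
}
```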

dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;
For a given GPUOrigin2D value origin, depending on its type, the syntax:
validate GPUOrigin2D shape(origin)

Arguments:

Returns: undefined

Content timeline steps:

  1. Throw a TypeError if origin is a sequence and origin.size > 2.

dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;
For a given GPUOrigin3D value origin, depending on its type, the syntax:
validate GPUOrigin3D shape(origin)

Arguments:

Returns: undefined

Content timeline steps:

  1. Throw a TypeError if origin is a sequence and origin.size > 3.

dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    GPUIntegerCoordinate height = 1;
    GPUIntegerCoordinate depthOrArrayLayers = 1;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;
width, of type GPUIntegerCoordinate

The width of the extent.

height, of type GPUIntegerCoordinate, defaulting to 1

The height of the extent.

depthOrArrayLayers, of type GPUIntegerCoordinate, defaulting to 1

The depth of the extent or the number of array layers it contains. If used with a GPUTexture with a GPUTextureDimension of "3d", this defines the depth of the texture. If used with a GPUTexture with a GPUTextureDimension of "2d", this defines the number of array layers in the texture.
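Note: the relationship between the sequence and dictionary forms of GPUExtent3D, including the defaults height = 1 and depthOrArrayLayers = 1, can be sketched non-normatively with a hypothetical normalization helper.

```javascript
// Normalize either GPUExtent3D shorthand into a full GPUExtent3DDict,
// applying the defaults height = 1 and depthOrArrayLayers = 1.
function normalizeExtent3D(extent) {
  if (Array.isArray(extent)) {
    if (extent.length < 1 || extent.length > 3) {
      throw new TypeError("A GPUExtent3D sequence must contain 1 to 3 elements");
    }
    const [width, height = 1, depthOrArrayLayers = 1] = extent;
    return { width, height, depthOrArrayLayers };
  }
  return { height: 1, depthOrArrayLayers: 1, ...extent };
}
```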

For a given GPUExtent3D value extent, depending on its type, the syntax:
validate GPUExtent3D shape(extent)

Arguments:

Returns: undefined

Content timeline steps:

  1. Throw a TypeError if:

25. Feature Index

25.1. "core-features-and-limits"

Allows all Core WebGPU features and limits to be used.

Note: This is currently available on all adapters and enabled automatically on all devices even if not requested.

25.2. "depth-clip-control"

Allows depth clipping to be disabled.

This feature adds the following optional API surfaces:

25.3. "depth32float-stencil8"

Allows for explicit creation of textures of format "depth32float-stencil8".

This feature adds the following optional API surfaces:

25.4. "texture-compression-bc"

Allows for explicit creation of textures of BC compressed formats which include the "S3TC", "RGTC", and "BPTC" formats. Only supports 2D textures.

Note: Adapters which support "texture-compression-bc" do not always support "texture-compression-bc-sliced-3d". To use "texture-compression-bc-sliced-3d", "texture-compression-bc" must be enabled explicitly as this feature does not enable the BC formats.

This feature adds the following optional API surfaces:

25.5. "texture-compression-bc-sliced-3d"

Allows the 3d dimension for textures with BC compressed formats.

Note: Adapters which support "texture-compression-bc" do not always support "texture-compression-bc-sliced-3d". To use "texture-compression-bc-sliced-3d", "texture-compression-bc" must be enabled explicitly as this feature does not enable the BC formats.

This feature adds no optional API surfaces.

25.6. "texture-compression-etc2"

Allows for explicit creation of textures of ETC2 compressed formats. Only supports 2D textures.

This feature adds the following optional API surfaces:

25.7. "texture-compression-astc"

Allows for explicit creation of textures of ASTC compressed formats. Only supports 2D textures.

This feature adds the following optional API surfaces:

25.8. "texture-compression-astc-sliced-3d"

Allows the 3d dimension for textures with ASTC compressed formats.

Note: Adapters which support "texture-compression-astc" do not always support "texture-compression-astc-sliced-3d". To use "texture-compression-astc-sliced-3d", "texture-compression-astc" must be enabled explicitly as this feature does not enable the ASTC formats.

This feature adds no optional API surfaces.

25.9. "timestamp-query"

Adds the ability to query timestamps from GPU command buffers. See § 20.4 Timestamp Query.

This feature adds the following optional API surfaces:

25.10. "indirect-first-instance"

Allows the use of non-zero firstInstance values in indirect draw parameters and indirect drawIndexed parameters.

This feature adds no optional API surfaces.

25.11. "shader-f16"

Allows the use of the half-precision floating-point type f16 in WGSL.

This feature adds the following optional API surfaces:

25.12. "rg11b10ufloat-renderable"

Allows the RENDER_ATTACHMENT usage on textures with format "rg11b10ufloat", and also allows textures of that format to be blended, multisampled, and resolved.

This feature adds no optional API surfaces.

Enabling "texture-formats-tier1" at device creation will also enable "rg11b10ufloat-renderable".

25.13. "bgra8unorm-storage"

Allows the STORAGE_BINDING usage on textures with format "bgra8unorm".

This feature adds no optional API surfaces.

25.14. "float32-filterable"

Makes textures with formats "r32float", "rg32float", and "rgba32float" filterable.

25.15. "float32-blendable"

Makes textures with formats "r32float", "rg32float", and "rgba32float" blendable.

25.16. "clip-distances"

Allows the use of clip_distances in WGSL.

This feature adds the following optional API surfaces:

25.17. "dual-source-blending"

Allows the use of blend_src in WGSL and simultaneously using both pixel shader outputs (@blend_src(0) and @blend_src(1)) as inputs to a blending operation with the single color attachment at location 0.

This feature adds the following optional API surfaces:

25.18. "subgroups"

Allows the use of the subgroup and quad operations in WGSL.

This feature adds no optional API surfaces, but the following entries of GPUAdapterInfo expose real values whenever the feature is available on the adapter:

25.19. "texture-formats-tier1"

Adds support for the following new GPUTextureFormats with the RENDER_ATTACHMENT, blendable, and multisampling capabilities, and with the STORAGE_BINDING capability under the "read-only" and "write-only" GPUStorageTextureAccesses:

Allows the RENDER_ATTACHMENT, blendable, multisampling, and resolve capabilities on the following GPUTextureFormats:

Allows the "read-only" or "write-only" GPUStorageTextureAccess on the following GPUTextureFormats:

Enabling "texture-formats-tier2" at device creation will also enable "texture-formats-tier1".

Enabling "texture-formats-tier1" at device creation will also enable "rg11b10ufloat-renderable".

25.20. "texture-formats-tier2"

Allows the "read-write" GPUStorageTextureAccess on the following GPUTextureFormats:

Enabling "texture-formats-tier2" at device creation will also enable "texture-formats-tier1".

26. Appendices

26.1. Texture Format Capabilities

26.1.1. Plain color formats

All supported plain color formats support usages COPY_SRC, COPY_DST, and TEXTURE_BINDING, and dimension "3d".

The RENDER_ATTACHMENT and STORAGE_BINDING columns specify support for GPUTextureUsage.RENDER_ATTACHMENT and GPUTextureUsage.STORAGE_BINDING usage respectively.

The render target pixel byte cost and render target component alignment are used to validate the maxColorAttachmentBytesPerSample limit.

Note: The texel block memory cost of each of these formats is the same as its texel block copy footprint.

Format Required Feature GPUTextureSampleType RENDER_ATTACHMENT blendable multisampling resolve STORAGE_BINDING Texel block copy footprint (Bytes) Render target pixel byte cost (Bytes)
"write-only" "read-only" "read-write"
8 bits per component (1-byte render target component alignment)
r8unorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 1
r8snorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 1
r8uint "uint" If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 1
r8sint "sint" If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 1
rg8unorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 2
rg8snorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 2
rg8uint "uint" If "texture-formats-tier1" is enabled 2
rg8sint "sint" If "texture-formats-tier1" is enabled 2
rgba8unorm "float",
"unfilterable-float"
If "texture-formats-tier2" is enabled 4 8
rgba8unorm-srgb "float",
"unfilterable-float"
4 8
rgba8snorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 4
rgba8uint "uint" If "texture-formats-tier2" is enabled 4
rgba8sint "sint" If "texture-formats-tier2" is enabled 4
bgra8unorm "float",
"unfilterable-float"
If "bgra8unorm-storage" is enabled 4 8
bgra8unorm-srgb "float",
"unfilterable-float"
4 8
16 bits per component (2-byte render target component alignment)
r16unorm "texture-formats-tier1" "unfilterable-float" 2
r16snorm "texture-formats-tier1" "unfilterable-float" 2
r16uint "uint" If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 2
r16sint "sint" If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 2
r16float "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 2
rg16unorm "texture-formats-tier1" "unfilterable-float" 4
rg16snorm "texture-formats-tier1" "unfilterable-float" 4
rg16uint "uint" If "texture-formats-tier1" is enabled 4
rg16sint "sint" If "texture-formats-tier1" is enabled 4
rg16float "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 4
rgba16unorm "texture-formats-tier1" "unfilterable-float" 8
rgba16snorm "texture-formats-tier1" "unfilterable-float" 8
rgba16uint "uint" If "texture-formats-tier2" is enabled 8
rgba16sint "sint" If "texture-formats-tier2" is enabled 8
rgba16float "float",
"unfilterable-float"
If "texture-formats-tier2" is enabled 8
32 bits per component (4-byte render target component alignment)
r32uint "uint" 4
r32sint "sint" 4
r32float "unfilterable-float" If "float32-blendable" is enabled 4
rg32uint "uint" 8
rg32sint "sint" 8
rg32float "unfilterable-float" If "float32-blendable" is enabled 8
rgba32uint "uint" If "texture-formats-tier2" is enabled 16
rgba32sint "sint" If "texture-formats-tier2" is enabled 16
rgba32float "unfilterable-float" If "float32-blendable" is enabled If "texture-formats-tier2" is enabled 16
mixed component width, 32 bits per texel (4-byte render target component alignment)
rgb10a2uint "uint" If "texture-formats-tier1" is enabled 4 8
rgb10a2unorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 4 8
rg11b10ufloat "float",
"unfilterable-float"
If "rg11b10ufloat-renderable" is enabled If "texture-formats-tier1" is enabled 4 8

26.1.2. Depth-stencil formats

A depth-or-stencil format is any format with depth and/or stencil aspects. A combined depth-stencil format is a depth-or-stencil format that has both depth and stencil aspects.

All depth-or-stencil formats support the COPY_SRC, COPY_DST, TEXTURE_BINDING, and RENDER_ATTACHMENT usages. All of these formats support multisampling. However, certain copy operations also restrict the source and destination formats, and none of these formats support textures with "3d" dimension.

Depth textures cannot be used with "filtering" samplers, but can always be used with "comparison" samplers even if they use filtering.

Format Texel block memory cost (Bytes) Aspect GPUTextureSampleType Valid texel copy source Valid texel copy destination Texel block copy footprint (Bytes) Aspect-specific format
stencil8 1 − 4 stencil "uint" 1 stencil8
depth16unorm 2 depth "depth", "unfilterable-float" 2 depth16unorm
depth24plus 4 depth "depth", "unfilterable-float" depth24plus
depth24plus-stencil8 4 − 8 depth "depth", "unfilterable-float" depth24plus
stencil "uint" 1 stencil8
depth32float 4 depth "depth", "unfilterable-float" 4 depth32float
depth32float-stencil8 5 − 8 depth "depth", "unfilterable-float" 4 depth32float
stencil "uint" 1 stencil8

24-bit depth refers to a 24-bit unsigned normalized depth format with a range from 0.0 to 1.0, which would be spelled "depth24unorm" if exposed.

26.1.2.1. Reading and Sampling Depth/Stencil Textures

It is possible to bind a depth-aspect GPUTextureView to either a texture_depth_* binding or a binding with other non-depth 2d/cube texture types.

A stencil-aspect GPUTextureView must be bound to a normal texture binding type. The sampleType in the GPUBindGroupLayout must be "uint".

Reading or sampling the depth or stencil aspect of a texture behaves as if the texture contains the values (V, X, X, X), where V is the actual depth or stencil value, and each X is an implementation-defined unspecified value.

For depth-aspect bindings, the unspecified values are not visible through bindings with texture_depth_* types.

If a depth texture is bound to tex with type texture_2d<f32>:

Note: Short of adding a new more constrained stencil sampler type (like depth), it’s infeasible for implementations to efficiently paper over the driver differences for depth/stencil reads. As this was not a portability pain point for WebGL, it’s not expected to be problematic in WebGPU. In practice, expect either (V, V, V, V) or (V, 0, 0, 1) (where V is the depth or stencil value), depending on hardware.

26.1.2.2. Copying Depth/Stencil Textures

The depth aspects of depth32float formats ("depth32float" and "depth32float-stencil8") have a limited range. As a result, copies into such textures are only valid from other textures of the same format.

The depth aspects of depth24plus formats ("depth24plus" and "depth24plus-stencil8") have opaque representations (implemented as either 24-bit depth or "depth32float"). As a result, depth-aspect texel copies are not allowed with these formats.

NOTE:
It is possible to imitate these disallowed copies:

26.1.3. Packed formats

All packed texture formats support COPY_SRC, COPY_DST, and TEXTURE_BINDING usages. All of these formats are filterable. None of these formats are renderable or support multisampling.

A compressed format is any format with a block size greater than 1×1.

Note: The texel block memory cost of each of these formats is the same as its texel block copy footprint.

Format Texel block copy footprint (Bytes) GPUTextureSampleType Texel block width/height "3d" Feature
rgb9e5ufloat 4 "float",
"unfilterable-float"
1 × 1
bc1-rgba-unorm 8 "float",
"unfilterable-float"
4 × 4 If "texture-compression-bc-sliced-3d" is enabled texture-compression-bc
bc1-rgba-unorm-srgb
bc2-rgba-unorm 16
bc2-rgba-unorm-srgb
bc3-rgba-unorm 16
bc3-rgba-unorm-srgb
bc4-r-unorm 8
bc4-r-snorm
bc5-rg-unorm 16
bc5-rg-snorm
bc6h-rgb-ufloat 16
bc6h-rgb-float
bc7-rgba-unorm 16
bc7-rgba-unorm-srgb
etc2-rgb8unorm 8 "float",
"unfilterable-float"
4 × 4 texture-compression-etc2
etc2-rgb8unorm-srgb
etc2-rgb8a1unorm 8
etc2-rgb8a1unorm-srgb
etc2-rgba8unorm 16
etc2-rgba8unorm-srgb
eac-r11unorm 8
eac-r11snorm
eac-rg11unorm 16
eac-rg11snorm
astc-4x4-unorm 16 "float",
"unfilterable-float"
4 × 4 If "texture-compression-astc-sliced-3d" is enabled texture-compression-astc
astc-4x4-unorm-srgb
astc-5x4-unorm 16 5 × 4
astc-5x4-unorm-srgb
astc-5x5-unorm 16 5 × 5
astc-5x5-unorm-srgb
astc-6x5-unorm 16 6 × 5
astc-6x5-unorm-srgb
astc-6x6-unorm 16 6 × 6
astc-6x6-unorm-srgb
astc-8x5-unorm 16 8 × 5
astc-8x5-unorm-srgb
astc-8x6-unorm 16 8 × 6
astc-8x6-unorm-srgb
astc-8x8-unorm 16 8 × 8
astc-8x8-unorm-srgb
astc-10x5-unorm 16 10 × 5
astc-10x5-unorm-srgb
astc-10x6-unorm 16 10 × 6
astc-10x6-unorm-srgb
astc-10x8-unorm 16 10 × 8
astc-10x8-unorm-srgb
astc-10x10-unorm 16 10 × 10
astc-10x10-unorm-srgb
astc-12x10-unorm 16 12 × 10
astc-12x10-unorm-srgb
astc-12x12-unorm 16 12 × 12
astc-12x12-unorm-srgb

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. Implementers are encouraged to optimize.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[ECMASCRIPT]
ECMAScript Language Specification. URL: https://tc39.es/ecma262/multipage/
[HR-TIME-3]
Yoav Weiss. High Resolution Time. 7 November 2024. WD. URL: https://www.w3.org/TR/hr-time-3/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[I18N-GLOSSARY]
Richard Ishida; Addison Phillips. Internationalization Glossary. 17 October 2024. NOTE. URL: https://www.w3.org/TR/i18n-glossary/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[WEBCODECS]
Paul Adenot; Eugene Zemtsov. WebCodecs. 14 May 2025. WD. URL: https://www.w3.org/TR/webcodecs/
[WEBGL-1]
Dean Jackson; Jeff Gilbert. WebGL Specification, Version 1.0. 9 August 2017. URL: https://www.khronos.org/registry/webgl/specs/latest/1.0/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/
[WEBXR]
Brandon Jones; Manish Goregaokar; Rik Cabanier. WebXR Device API. 17 April 2025. CRD. URL: https://www.w3.org/TR/webxr/
[WGSL]
Alan Baker; Mehmet Oguz Derin; David Neto. WebGPU Shading Language. 3 July 2025. CRD. URL: https://www.w3.org/TR/WGSL/

Informative References

[MEDIAQUERIES-5]
Dean Jackson; et al. Media Queries Level 5. 18 December 2021. WD. URL: https://www.w3.org/TR/mediaqueries-5/
[SERVICE-WORKERS]
Yoshisato Yanagisawa; Monica CHINTALA. Service Workers. 6 March 2025. CRD. URL: https://www.w3.org/TR/service-workers/
[VULKAN]
The Khronos Vulkan Working Group. Vulkan 1.3. URL: https://registry.khronos.org/vulkan/specs/1.3/html/vkspec.html

IDL Index

interface mixin GPUObjectBase {
    attribute USVString label;
};

dictionary GPUObjectDescriptorBase {
    USVString label = "";
};

[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxBindGroupsPlusVertexBuffers;
    readonly attribute unsigned long maxBindingsPerBindGroup;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long long maxUniformBufferBindingSize;
    readonly attribute unsigned long long maxStorageBufferBindingSize;
    readonly attribute unsigned long minUniformBufferOffsetAlignment;
    readonly attribute unsigned long minStorageBufferOffsetAlignment;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long long maxBufferSize;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
    readonly attribute unsigned long maxInterStageShaderVariables;
    readonly attribute unsigned long maxColorAttachments;
    readonly attribute unsigned long maxColorAttachmentBytesPerSample;
    readonly attribute unsigned long maxComputeWorkgroupStorageSize;
    readonly attribute unsigned long maxComputeInvocationsPerWorkgroup;
    readonly attribute unsigned long maxComputeWorkgroupSizeX;
    readonly attribute unsigned long maxComputeWorkgroupSizeY;
    readonly attribute unsigned long maxComputeWorkgroupSizeZ;
    readonly attribute unsigned long maxComputeWorkgroupsPerDimension;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};

[Exposed=(Window, Worker), SecureContext]
interface WGSLLanguageFeatures {
    readonly setlike<DOMString>;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUAdapterInfo {
    readonly attribute DOMString vendor;
    readonly attribute DOMString architecture;
    readonly attribute DOMString device;
    readonly attribute DOMString description;
    readonly attribute unsigned long subgroupMinSize;
    readonly attribute unsigned long subgroupMaxSize;
    readonly attribute boolean isFallbackAdapter;
};

interface mixin NavigatorGPU {
    [SameObject, SecureContext] readonly attribute GPU gpu;
};
Navigator includes NavigatorGPU;
WorkerNavigator includes NavigatorGPU;

[Exposed=(Window, Worker), SecureContext]
interface GPU {
    Promise<GPUAdapter?> requestAdapter(optional GPURequestAdapterOptions options = {});
    GPUTextureFormat getPreferredCanvasFormat();
    [SameObject] readonly attribute WGSLLanguageFeatures wgslLanguageFeatures;
};

dictionary GPURequestAdapterOptions {
    DOMString featureLevel = "core";
    GPUPowerPreference powerPreference;
    boolean forceFallbackAdapter = false;
    boolean xrCompatible = false;
};

enum GPUPowerPreference {
    "low-power",
    "high-performance",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUAdapter {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUAdapterInfo info;

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};

dictionary GPUDeviceDescriptor
         : GPUObjectDescriptorBase {
    sequence<GPUFeatureName> requiredFeatures = [];
    record<DOMString, (GPUSize64 or undefined)> requiredLimits = {};
    GPUQueueDescriptor defaultQueue = {};
};

enum GPUFeatureName {
    "core-features-and-limits",
    "depth-clip-control",
    "depth32float-stencil8",
    "texture-compression-bc",
    "texture-compression-bc-sliced-3d",
    "texture-compression-etc2",
    "texture-compression-astc",
    "texture-compression-astc-sliced-3d",
    "timestamp-query",
    "indirect-first-instance",
    "shader-f16",
    "rg11b10ufloat-renderable",
    "bgra8unorm-storage",
    "float32-filterable",
    "float32-blendable",
    "clip-distances",
    "dual-source-blending",
    "subgroups",
    "texture-formats-tier1",
    "texture-formats-tier2",
};
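For illustration (non-normative), a minimal sketch of acquiring a `GPUDevice`: `requestAdapter()` may resolve to `null`, and requesting a `GPUFeatureName` the adapter does not advertise causes `requestDevice()` to reject, so optional features should be gated on `adapter.features`. The function name `initWebGPU` is illustrative.

```javascript
// Sketch: acquire a GPUDevice, requesting an optional feature only when
// the adapter advertises it.
async function initWebGPU() {
  const adapter = await navigator.gpu.requestAdapter({
    powerPreference: "high-performance",
  });
  if (!adapter) throw new Error("No appropriate GPUAdapter found.");

  // Requesting an unsupported feature rejects the requestDevice() promise,
  // so check adapter.features first.
  const requiredFeatures = [];
  if (adapter.features.has("timestamp-query")) {
    requiredFeatures.push("timestamp-query");
  }
  return adapter.requestDevice({ requiredFeatures });
}

// Guard so the sketch is inert outside a WebGPU-capable environment.
if (globalThis.navigator?.gpu) {
  initWebGPU().then((device) => console.log("device ready:", device.label));
}
```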

[Exposed=(Window, Worker), SecureContext]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUAdapterInfo adapterInfo;

    [SameObject] readonly attribute GPUQueue queue;

    undefined destroy();

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});
    GPUExternalTexture importExternalTexture(GPUExternalTextureDescriptor descriptor);

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);
    Promise<GPUComputePipeline> createComputePipelineAsync(GPUComputePipelineDescriptor descriptor);
    Promise<GPURenderPipeline> createRenderPipelineAsync(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;

[Exposed=(Window, Worker), SecureContext]
interface GPUBuffer {
    readonly attribute GPUSize64Out size;
    readonly attribute GPUFlagsConstant usage;

    readonly attribute GPUBufferMapState mapState;

    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();

    undefined destroy();
};
GPUBuffer includes GPUObjectBase;

enum GPUBufferMapState {
    "unmapped",
    "pending",
    "mapped",
};

dictionary GPUBufferDescriptor
         : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};

typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUBufferUsage {
    const GPUFlagsConstant MAP_READ      = 0x0001;
    const GPUFlagsConstant MAP_WRITE     = 0x0002;
    const GPUFlagsConstant COPY_SRC      = 0x0004;
    const GPUFlagsConstant COPY_DST      = 0x0008;
    const GPUFlagsConstant INDEX         = 0x0010;
    const GPUFlagsConstant VERTEX        = 0x0020;
    const GPUFlagsConstant UNIFORM       = 0x0040;
    const GPUFlagsConstant STORAGE       = 0x0080;
    const GPUFlagsConstant INDIRECT      = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};

typedef [EnforceRange] unsigned long GPUMapModeFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUMapMode {
    const GPUFlagsConstant READ  = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};
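A non-normative sketch of a GPU-to-CPU readback using the flags above: `MAP_READ` may only be combined with `COPY_DST`, so results are first copied into a mappable staging buffer, which is then mapped with `mapAsync()`. The numeric constants mirror the `GPUBufferUsage` and `GPUMapMode` values in the IDL; the helper name `readBack` is illustrative.

```javascript
// Flag values mirror the GPUBufferUsage / GPUMapMode constants above.
const MAP_READ = 0x0001, COPY_DST = 0x0008;
const READBACK_USAGE = MAP_READ | COPY_DST; // 0x0009

// Copy srcBuffer into a mappable staging buffer, then map it for reading.
async function readBack(device, srcBuffer, size) {
  const staging = device.createBuffer({ size, usage: READBACK_USAGE });
  const encoder = device.createCommandEncoder();
  encoder.copyBufferToBuffer(srcBuffer, 0, staging, 0, size);
  device.queue.submit([encoder.finish()]);

  await staging.mapAsync(MAP_READ /* GPUMapMode.READ */, 0, size);
  // getMappedRange() is detached by unmap(), so copy the bytes out first.
  const copy = staging.getMappedRange(0, size).slice(0);
  staging.unmap();
  staging.destroy();
  return copy;
}
```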

[Exposed=(Window, Worker), SecureContext]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    undefined destroy();

    readonly attribute GPUIntegerCoordinateOut width;
    readonly attribute GPUIntegerCoordinateOut height;
    readonly attribute GPUIntegerCoordinateOut depthOrArrayLayers;
    readonly attribute GPUIntegerCoordinateOut mipLevelCount;
    readonly attribute GPUSize32Out sampleCount;
    readonly attribute GPUTextureDimension dimension;
    readonly attribute GPUTextureFormat format;
    readonly attribute GPUFlagsConstant usage;
};
GPUTexture includes GPUObjectBase;

dictionary GPUTextureDescriptor
         : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
    sequence<GPUTextureFormat> viewFormats = [];
};

enum GPUTextureDimension {
    "1d",
    "2d",
    "3d",
};

typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC          = 0x01;
    const GPUFlagsConstant COPY_DST          = 0x02;
    const GPUFlagsConstant TEXTURE_BINDING   = 0x04;
    const GPUFlagsConstant STORAGE_BINDING   = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT = 0x10;
};
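For illustration (non-normative), a sampled texture with a full mip chain: the maximum `mipLevelCount` for a 2D texture is `floor(log2(max(width, height))) + 1`. The numeric usage values mirror the `GPUTextureUsage` constants above; the helper names are illustrative.

```javascript
// Full mip chain length for a 2D texture.
function fullMipLevelCount(width, height) {
  return Math.floor(Math.log2(Math.max(width, height))) + 1;
}

// Sampled texture that can also receive uploads via queue.writeTexture().
function createSampledTexture(device, width, height) {
  return device.createTexture({
    size: [width, height],
    format: "rgba8unorm",
    mipLevelCount: fullMipLevelCount(width, height),
    usage: 0x04 | 0x02, // TEXTURE_BINDING | COPY_DST
  });
}
```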

[Exposed=(Window, Worker), SecureContext]
interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;

dictionary GPUTextureViewDescriptor
         : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureUsageFlags usage = 0;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount;
};

enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d",
};

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only",
};

enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16unorm",
    "r16snorm",
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16unorm",
    "rg16snorm",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2uint",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16unorm",
    "rgba16snorm",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth/stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // ETC2 compressed formats usable if "texture-compression-etc2" is both
    // supported by the device/user agent and enabled in requestDevice.
    "etc2-rgb8unorm",
    "etc2-rgb8unorm-srgb",
    "etc2-rgb8a1unorm",
    "etc2-rgb8a1unorm-srgb",
    "etc2-rgba8unorm",
    "etc2-rgba8unorm-srgb",
    "eac-r11unorm",
    "eac-r11snorm",
    "eac-rg11unorm",
    "eac-rg11snorm",

    // ASTC compressed formats usable if "texture-compression-astc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "astc-4x4-unorm",
    "astc-4x4-unorm-srgb",
    "astc-5x4-unorm",
    "astc-5x4-unorm-srgb",
    "astc-5x5-unorm",
    "astc-5x5-unorm-srgb",
    "astc-6x5-unorm",
    "astc-6x5-unorm-srgb",
    "astc-6x6-unorm",
    "astc-6x6-unorm-srgb",
    "astc-8x5-unorm",
    "astc-8x5-unorm-srgb",
    "astc-8x6-unorm",
    "astc-8x6-unorm-srgb",
    "astc-8x8-unorm",
    "astc-8x8-unorm-srgb",
    "astc-10x5-unorm",
    "astc-10x5-unorm-srgb",
    "astc-10x6-unorm",
    "astc-10x6-unorm-srgb",
    "astc-10x8-unorm",
    "astc-10x8-unorm-srgb",
    "astc-10x10-unorm",
    "astc-10x10-unorm-srgb",
    "astc-12x10-unorm",
    "astc-12x10-unorm-srgb",
    "astc-12x12-unorm",
    "astc-12x12-unorm-srgb",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUExternalTexture {
};
GPUExternalTexture includes GPUObjectBase;

dictionary GPUExternalTextureDescriptor
         : GPUObjectDescriptorBase {
    required (HTMLVideoElement or VideoFrame) source;
    PredefinedColorSpace colorSpace = "srgb";
};

[Exposed=(Window, Worker), SecureContext]
interface GPUSampler {
};
GPUSampler includes GPUObjectBase;

dictionary GPUSamplerDescriptor
         : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUMipmapFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 32;
    GPUCompareFunction compare;
    [Clamp] unsigned short maxAnisotropy = 1;
};

enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat",
};

enum GPUFilterMode {
    "nearest",
    "linear",
};

enum GPUMipmapFilterMode {
    "nearest",
    "linear",
};

enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always",
};
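A non-normative sampler sketch: a trilinear, repeating sampler. Note that `maxAnisotropy` greater than 1 is only valid when `magFilter`, `minFilter`, and `mipmapFilter` are all `"linear"`, which the descriptor below satisfies.

```javascript
// Trilinear sampler descriptor with wrapping addressing.
const trilinearRepeat = {
  addressModeU: "repeat",
  addressModeV: "repeat",
  magFilter: "linear",
  minFilter: "linear",
  mipmapFilter: "linear",
  maxAnisotropy: 4, // requires all three filters to be "linear"
};

function createTrilinearSampler(device) {
  return device.createSampler(trilinearRepeat);
}
```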

[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;

dictionary GPUBindGroupLayoutDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;

    GPUBufferBindingLayout buffer;
    GPUSamplerBindingLayout sampler;
    GPUTextureBindingLayout texture;
    GPUStorageTextureBindingLayout storageTexture;
    GPUExternalTextureBindingLayout externalTexture;
};

typedef [EnforceRange] unsigned long GPUShaderStageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUShaderStage {
    const GPUFlagsConstant VERTEX   = 0x1;
    const GPUFlagsConstant FRAGMENT = 0x2;
    const GPUFlagsConstant COMPUTE  = 0x4;
};

enum GPUBufferBindingType {
    "uniform",
    "storage",
    "read-only-storage",
};

dictionary GPUBufferBindingLayout {
    GPUBufferBindingType type = "uniform";
    boolean hasDynamicOffset = false;
    GPUSize64 minBindingSize = 0;
};

enum GPUSamplerBindingType {
    "filtering",
    "non-filtering",
    "comparison",
};

dictionary GPUSamplerBindingLayout {
    GPUSamplerBindingType type = "filtering";
};

enum GPUTextureSampleType {
    "float",
    "unfilterable-float",
    "depth",
    "sint",
    "uint",
};

dictionary GPUTextureBindingLayout {
    GPUTextureSampleType sampleType = "float";
    GPUTextureViewDimension viewDimension = "2d";
    boolean multisampled = false;
};

enum GPUStorageTextureAccess {
    "write-only",
    "read-only",
    "read-write",
};

dictionary GPUStorageTextureBindingLayout {
    GPUStorageTextureAccess access = "write-only";
    required GPUTextureFormat format;
    GPUTextureViewDimension viewDimension = "2d";
};

dictionary GPUExternalTextureBindingLayout {
};

[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;

dictionary GPUBindGroupDescriptor
         : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};

typedef (GPUSampler or
         GPUTexture or
         GPUTextureView or
         GPUBuffer or
         GPUBufferBinding or
         GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};

dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};
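For illustration (non-normative), a layout/bind-group pair: each `GPUBindGroupEntry.binding` must match an entry with the same `binding` in the `GPUBindGroupLayout`. The visibility values mirror the `GPUShaderStage` constants; the uniform buffer, texture view, and sampler parameters are assumed to be created elsewhere.

```javascript
// A uniform buffer visible to vertex and fragment stages, plus a sampled
// texture and filtering sampler for the fragment stage.
const layoutEntries = [
  { binding: 0, visibility: 0x1 | 0x2, buffer: { type: "uniform" } }, // VERTEX | FRAGMENT
  { binding: 1, visibility: 0x2, texture: { sampleType: "float" } },  // FRAGMENT
  { binding: 2, visibility: 0x2, sampler: { type: "filtering" } },    // FRAGMENT
];

function createBindings(device, uniformBuffer, textureView, sampler) {
  const layout = device.createBindGroupLayout({ entries: layoutEntries });
  const group = device.createBindGroup({
    layout,
    entries: [
      { binding: 0, resource: { buffer: uniformBuffer } },
      { binding: 1, resource: textureView },
      { binding: 2, resource: sampler },
    ],
  });
  return { layout, group };
}
```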

[Exposed=(Window, Worker), SecureContext]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;

dictionary GPUPipelineLayoutDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout?> bindGroupLayouts;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> getCompilationInfo();
};
GPUShaderModule includes GPUObjectBase;

dictionary GPUShaderModuleDescriptor
         : GPUObjectDescriptorBase {
    required USVString code;
    sequence<GPUShaderModuleCompilationHint> compilationHints = [];
};

dictionary GPUShaderModuleCompilationHint {
    required USVString entryPoint;
    (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};

enum GPUCompilationMessageType {
    "error",
    "warning",
    "info",
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
    readonly attribute unsigned long long offset;
    readonly attribute unsigned long long length;
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationInfo {
    readonly attribute FrozenArray<GPUCompilationMessage> messages;
};
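A non-normative sketch of shader compilation: `createShaderModule()` always returns a module synchronously, and diagnostics (errors, warnings, info) are retrieved asynchronously via `getCompilationInfo()`. The WGSL below is a minimal compute entry point used purely for illustration.

```javascript
// Minimal WGSL: a no-op compute entry point.
const wgsl = `
@compute @workgroup_size(1)
fn main() {}
`;

async function compile(device) {
  const module = device.createShaderModule({ code: wgsl });
  // Surface any compiler diagnostics with their source positions.
  const info = await module.getCompilationInfo();
  for (const msg of info.messages) {
    console.log(`${msg.type} at ${msg.lineNum}:${msg.linePos}: ${msg.message}`);
  }
  return module;
}
```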

[Exposed=(Window, Worker), SecureContext, Serializable]
interface GPUPipelineError : DOMException {
    constructor(optional DOMString message = "", GPUPipelineErrorInit options);
    readonly attribute GPUPipelineErrorReason reason;
};

dictionary GPUPipelineErrorInit {
    required GPUPipelineErrorReason reason;
};

enum GPUPipelineErrorReason {
    "validation",
    "internal",
};

enum GPUAutoLayoutMode {
    "auto",
};

dictionary GPUPipelineDescriptorBase
         : GPUObjectDescriptorBase {
    required (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};

interface mixin GPUPipelineBase {
    [NewObject] GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};

dictionary GPUProgrammableStage {
    required GPUShaderModule module;
    USVString entryPoint;
    record<USVString, GPUPipelineConstantValue> constants = {};
};

typedef double GPUPipelineConstantValue; // May represent WGSL's bool, f32, i32, u32, and f16 if enabled.

[Exposed=(Window, Worker), SecureContext]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;

dictionary GPUComputePipelineDescriptor
         : GPUPipelineDescriptorBase {
    required GPUProgrammableStage compute;
};
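For illustration (non-normative), a compute pipeline with `layout: "auto"`, recovering the generated layout via `getBindGroupLayout()`. A small helper also shows the usual dispatch-size arithmetic: the number of workgroups for a 1D problem is `ceil(n / workgroupSize)`.

```javascript
// Workgroup dispatch count for a 1D problem of n items.
function workgroupCount(n, workgroupSize) {
  return Math.ceil(n / workgroupSize);
}

// Pipeline with an auto-generated layout; createComputePipelineAsync()
// resolves once the pipeline is ready without blocking queue work.
async function makeComputePipeline(device, module) {
  const pipeline = await device.createComputePipelineAsync({
    layout: "auto",
    compute: { module, entryPoint: "main" },
  });
  const bgl0 = pipeline.getBindGroupLayout(0); // layout for @group(0)
  return { pipeline, bgl0 };
}
```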

[Exposed=(Window, Worker), SecureContext]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;

dictionary GPURenderPipelineDescriptor
         : GPUPipelineDescriptorBase {
    required GPUVertexState vertex;
    GPUPrimitiveState primitive = {};
    GPUDepthStencilState depthStencil;
    GPUMultisampleState multisample = {};
    GPUFragmentState fragment;
};

dictionary GPUPrimitiveState {
    GPUPrimitiveTopology topology = "triangle-list";
    GPUIndexFormat stripIndexFormat;
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    // Requires "depth-clip-control" feature.
    boolean unclippedDepth = false;
};

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip",
};

enum GPUFrontFace {
    "ccw",
    "cw",
};

enum GPUCullMode {
    "none",
    "front",
    "back",
};

dictionary GPUMultisampleState {
    GPUSize32 count = 1;
    GPUSampleMask mask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};

dictionary GPUFragmentState
         : GPUProgrammableStage {
    required sequence<GPUColorTargetState?> targets;
};

dictionary GPUColorTargetState {
    required GPUTextureFormat format;

    GPUBlendState blend;
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};

dictionary GPUBlendState {
    required GPUBlendComponent color;
    required GPUBlendComponent alpha;
};

typedef [EnforceRange] unsigned long GPUColorWriteFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUColorWrite {
    const GPUFlagsConstant RED   = 0x1;
    const GPUFlagsConstant GREEN = 0x2;
    const GPUFlagsConstant BLUE  = 0x4;
    const GPUFlagsConstant ALPHA = 0x8;
    const GPUFlagsConstant ALL   = 0xF;
};

dictionary GPUBlendComponent {
    GPUBlendOperation operation = "add";
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
};

enum GPUBlendFactor {
    "zero",
    "one",
    "src",
    "one-minus-src",
    "src-alpha",
    "one-minus-src-alpha",
    "dst",
    "one-minus-dst",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "constant",
    "one-minus-constant",
    "src1",
    "one-minus-src1",
    "src1-alpha",
    "one-minus-src1-alpha",
};

enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max",
};

dictionary GPUDepthStencilState {
    required GPUTextureFormat format;

    boolean depthWriteEnabled;
    GPUCompareFunction depthCompare;

    GPUStencilFaceState stencilFront = {};
    GPUStencilFaceState stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};

dictionary GPUStencilFaceState {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap",
};

enum GPUIndexFormat {
    "uint16",
    "uint32",
};

enum GPUVertexFormat {
    "uint8",
    "uint8x2",
    "uint8x4",
    "sint8",
    "sint8x2",
    "sint8x4",
    "unorm8",
    "unorm8x2",
    "unorm8x4",
    "snorm8",
    "snorm8x2",
    "snorm8x4",
    "uint16",
    "uint16x2",
    "uint16x4",
    "sint16",
    "sint16x2",
    "sint16x4",
    "unorm16",
    "unorm16x2",
    "unorm16x4",
    "snorm16",
    "snorm16x2",
    "snorm16x4",
    "float16",
    "float16x2",
    "float16x4",
    "float32",
    "float32x2",
    "float32x3",
    "float32x4",
    "uint32",
    "uint32x2",
    "uint32x3",
    "uint32x4",
    "sint32",
    "sint32x2",
    "sint32x3",
    "sint32x4",
    "unorm10-10-10-2",
    "unorm8x4-bgra",
};

enum GPUVertexStepMode {
    "vertex",
    "instance",
};

dictionary GPUVertexState
         : GPUProgrammableStage {
    sequence<GPUVertexBufferLayout?> buffers = [];
};

dictionary GPUVertexBufferLayout {
    required GPUSize64 arrayStride;
    GPUVertexStepMode stepMode = "vertex";
    required sequence<GPUVertexAttribute> attributes;
};

dictionary GPUVertexAttribute {
    required GPUVertexFormat format;
    required GPUSize64 offset;

    required GPUIndex32 shaderLocation;
};
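A non-normative interleaved vertex layout: a `float32x3` position (12 bytes) followed by a `float32x2` texture coordinate (8 bytes), giving an `arrayStride` of 20 bytes. Each attribute's `shaderLocation` must match the corresponding `@location` in the vertex shader.

```javascript
// Byte sizes implied by the vertex formats used below.
const POSITION_SIZE = 3 * 4; // float32x3
const UV_SIZE = 2 * 4;       // float32x2

const vertexBufferLayout = {
  arrayStride: POSITION_SIZE + UV_SIZE, // 20 bytes per vertex
  stepMode: "vertex",
  attributes: [
    { format: "float32x3", offset: 0, shaderLocation: 0 },             // position
    { format: "float32x2", offset: POSITION_SIZE, shaderLocation: 1 }, // uv
  ],
};
```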

dictionary GPUTexelCopyBufferLayout {
    GPUSize64 offset = 0;
    GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage;
};

dictionary GPUTexelCopyBufferInfo
         : GPUTexelCopyBufferLayout {
    required GPUBuffer buffer;
};

dictionary GPUTexelCopyTextureInfo {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
    GPUTextureAspect aspect = "all";
};
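For illustration (non-normative), the row-pitch arithmetic these layouts imply: for `copyBufferToTexture()` and `copyTextureToBuffer()`, `bytesPerRow` must be a multiple of 256; `queue.writeTexture()` has no such alignment restriction. The helper names are illustrative.

```javascript
// 256-byte row alignment required for buffer <-> texture copies.
function alignedBytesPerRow(width, bytesPerTexel) {
  return Math.ceil((width * bytesPerTexel) / 256) * 256;
}

// Layout for reading back a width x height "rgba8unorm" (4 bytes/texel) texture.
function readbackLayout(width, height) {
  return {
    offset: 0,
    bytesPerRow: alignedBytesPerRow(width, 4),
    rowsPerImage: height,
  };
}
```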

dictionary GPUCopyExternalImageDestInfo
         : GPUTexelCopyTextureInfo {
    PredefinedColorSpace colorSpace = "srgb";
    boolean premultipliedAlpha = false;
};

typedef (ImageBitmap or
         ImageData or
         HTMLImageElement or
         HTMLVideoElement or
         VideoFrame or
         HTMLCanvasElement or
         OffscreenCanvas) GPUCopyExternalImageSource;

dictionary GPUCopyExternalImageSourceInfo {
    required GPUCopyExternalImageSource source;
    GPUOrigin2D origin = {};
    boolean flipY = false;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;

dictionary GPUCommandBufferDescriptor
         : GPUObjectDescriptorBase {
};

interface mixin GPUCommandsMixin {
};

[Exposed=(Window, Worker), SecureContext]
interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUBuffer destination,
        optional GPUSize64 size);
    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        optional GPUSize64 size);

    undefined copyBufferToTexture(
        GPUTexelCopyBufferInfo source,
        GPUTexelCopyTextureInfo destination,
        GPUExtent3D copySize);

    undefined copyTextureToBuffer(
        GPUTexelCopyTextureInfo source,
        GPUTexelCopyBufferInfo destination,
        GPUExtent3D copySize);

    undefined copyTextureToTexture(
        GPUTexelCopyTextureInfo source,
        GPUTexelCopyTextureInfo destination,
        GPUExtent3D copySize);

    undefined clearBuffer(
        GPUBuffer buffer,
        optional GPUSize64 offset = 0,
        optional GPUSize64 size);

    undefined resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
GPUCommandEncoder includes GPUCommandsMixin;
GPUCommandEncoder includes GPUDebugCommandsMixin;

dictionary GPUCommandEncoderDescriptor
         : GPUObjectDescriptorBase {
};

interface mixin GPUBindingCommandsMixin {
    undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
        optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
        [AllowShared] Uint32Array dynamicOffsetsData,
        GPUSize64 dynamicOffsetsDataStart,
        GPUSize32 dynamicOffsetsDataLength);
};

interface mixin GPUDebugCommandsMixin {
    undefined pushDebugGroup(USVString groupLabel);
    undefined popDebugGroup();
    undefined insertDebugMarker(USVString markerLabel);
};

[Exposed=(Window, Worker), SecureContext]
interface GPUComputePassEncoder {
    undefined setPipeline(GPUComputePipeline pipeline);
    undefined dispatchWorkgroups(GPUSize32 workgroupCountX, optional GPUSize32 workgroupCountY = 1, optional GPUSize32 workgroupCountZ = 1);
    undefined dispatchWorkgroupsIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    undefined end();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUCommandsMixin;
GPUComputePassEncoder includes GPUDebugCommandsMixin;
GPUComputePassEncoder includes GPUBindingCommandsMixin;

dictionary GPUComputePassTimestampWrites {
    required GPUQuerySet querySet;
    GPUSize32 beginningOfPassWriteIndex;
    GPUSize32 endOfPassWriteIndex;
};

dictionary GPUComputePassDescriptor
         : GPUObjectDescriptorBase {
    GPUComputePassTimestampWrites timestampWrites;
};

[Exposed=(Window, Worker), SecureContext]
interface GPURenderPassEncoder {
    undefined setViewport(float x, float y,
        float width, float height,
        float minDepth, float maxDepth);

    undefined setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    undefined setBlendConstant(GPUColor color);
    undefined setStencilReference(GPUStencilValue reference);

    undefined beginOcclusionQuery(GPUSize32 queryIndex);
    undefined endOcclusionQuery();

    undefined executeBundles(sequence<GPURenderBundle> bundles);
    undefined end();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUCommandsMixin;
GPURenderPassEncoder includes GPUDebugCommandsMixin;
GPURenderPassEncoder includes GPUBindingCommandsMixin;
GPURenderPassEncoder includes GPURenderCommandsMixin;

dictionary GPURenderPassTimestampWrites {
    required GPUQuerySet querySet;
    GPUSize32 beginningOfPassWriteIndex;
    GPUSize32 endOfPassWriteIndex;
};

dictionary GPURenderPassDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachment?> colorAttachments;
    GPURenderPassDepthStencilAttachment depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
    GPURenderPassTimestampWrites timestampWrites;
    GPUSize64 maxDrawCount = 50000000;
};

dictionary GPURenderPassColorAttachment {
    required (GPUTexture or GPUTextureView) view;
    GPUIntegerCoordinate depthSlice;
    (GPUTexture or GPUTextureView) resolveTarget;

    GPUColor clearValue;
    required GPULoadOp loadOp;
    required GPUStoreOp storeOp;
};

dictionary GPURenderPassDepthStencilAttachment {
    required (GPUTexture or GPUTextureView) view;

    float depthClearValue;
    GPULoadOp depthLoadOp;
    GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;

    GPUStencilValue stencilClearValue = 0;
    GPULoadOp stencilLoadOp;
    GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};

enum GPULoadOp {
    "load",
    "clear",
};

enum GPUStoreOp {
    "store",
    "discard",
};
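A non-normative single-pass frame sketch: `loadOp: "clear"` clears the attachment to `clearValue` when the pass begins, and `storeOp: "store"` keeps the result after `end()`. The parameters (`colorView`, `pipeline`, `vertexCount`) are assumed to be created elsewhere.

```javascript
// Encode one render pass that clears, draws, and finishes a command buffer.
function encodeFrame(device, colorView, pipeline, vertexCount) {
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      view: colorView,
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      loadOp: "clear",  // clear to clearValue at the start of the pass
      storeOp: "store", // keep the rendered result after the pass ends
    }],
  });
  pass.setPipeline(pipeline);
  pass.draw(vertexCount);
  pass.end();
  return encoder.finish();
}
```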

dictionary GPURenderPassLayout
         : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat?> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};

interface mixin GPURenderCommandsMixin {
    undefined setPipeline(GPURenderPipeline pipeline);

    undefined setIndexBuffer(GPUBuffer buffer, GPUIndexFormat indexFormat, optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined setVertexBuffer(GPUIndex32 slot, GPUBuffer? buffer, optional GPUSize64 offset = 0, optional GPUSize64 size);

    undefined draw(GPUSize32 vertexCount, optional GPUSize32 instanceCount = 1,
        optional GPUSize32 firstVertex = 0, optional GPUSize32 firstInstance = 0);
    undefined drawIndexed(GPUSize32 indexCount, optional GPUSize32 instanceCount = 1,
        optional GPUSize32 firstIndex = 0,
        optional GPUSignedOffset32 baseVertex = 0,
        optional GPUSize32 firstInstance = 0);

    undefined drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    undefined drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;

dictionary GPURenderBundleDescriptor
         : GPUObjectDescriptorBase {
};

[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUCommandsMixin;
GPURenderBundleEncoder includes GPUDebugCommandsMixin;
GPURenderBundleEncoder includes GPUBindingCommandsMixin;
GPURenderBundleEncoder includes GPURenderCommandsMixin;

dictionary GPURenderBundleEncoderDescriptor
         : GPURenderPassLayout {
    boolean depthReadOnly = false;
    boolean stencilReadOnly = false;
};

dictionary GPUQueueDescriptor
         : GPUObjectDescriptorBase {
};

[Exposed=(Window, Worker), SecureContext]
interface GPUQueue {
    undefined submit(sequence<GPUCommandBuffer> commandBuffers);

    Promise<undefined> onSubmittedWorkDone();

    undefined writeBuffer(
        GPUBuffer buffer,
        GPUSize64 bufferOffset,
        AllowSharedBufferSource data,
        optional GPUSize64 dataOffset = 0,
        optional GPUSize64 size);

    undefined writeTexture(
        GPUTexelCopyTextureInfo destination,
        AllowSharedBufferSource data,
        GPUTexelCopyBufferLayout dataLayout,
        GPUExtent3D size);

    undefined copyExternalImageToTexture(
        GPUCopyExternalImageSourceInfo source,
        GPUCopyExternalImageDestInfo destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;
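For illustration (non-normative), two common queue patterns: updating a uniform buffer with `writeBuffer()` (for a `TypedArray` source, `dataOffset` and `size` are measured in elements, not bytes), and awaiting completion of submitted work. The helper names are illustrative.

```javascript
// Upload the contents of a Float32Array into a uniform buffer.
function updateUniforms(device, uniformBuffer, matrix /* Float32Array */) {
  // dataOffset/size are in elements because the source is a TypedArray.
  device.queue.writeBuffer(uniformBuffer, 0, matrix, 0, matrix.length);
}

// Submit command buffers and resolve once the GPU has finished them.
function submitAndWait(device, commandBuffers) {
  device.queue.submit(commandBuffers);
  return device.queue.onSubmittedWorkDone();
}
```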

[Exposed=(Window, Worker), SecureContext]
interface GPUQuerySet {
    undefined destroy();

    readonly attribute GPUQueryType type;
    readonly attribute GPUSize32Out count;
};
GPUQuerySet includes GPUObjectBase;

dictionary GPUQuerySetDescriptor
         : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
};

enum GPUQueryType {
    "occlusion",
    "timestamp",
};
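A non-normative query-resolution sketch: each resolved query occupies 8 bytes (a 64-bit value) in the destination buffer, the buffer needs `QUERY_RESOLVE` usage, and `resolveQuerySet()`'s `destinationOffset` must be a multiple of 256. The numeric usage values mirror the `GPUBufferUsage` constants above.

```javascript
// Each resolved query result is a 64-bit value.
const BYTES_PER_QUERY = 8;

function resolveBufferSize(queryCount) {
  return queryCount * BYTES_PER_QUERY;
}

// Resolve an entire query set into a freshly created buffer.
function resolveQueries(device, encoder, querySet) {
  const resolveBuffer = device.createBuffer({
    size: resolveBufferSize(querySet.count),
    usage: 0x0200 | 0x0004, // QUERY_RESOLVE | COPY_SRC
  });
  encoder.resolveQuerySet(querySet, 0, querySet.count, resolveBuffer, 0);
  return resolveBuffer;
}
```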

[Exposed=(Window, Worker), SecureContext]
interface GPUCanvasContext {
    readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;

    undefined configure(GPUCanvasConfiguration configuration);
    undefined unconfigure();

    GPUCanvasConfiguration? getConfiguration();
    GPUTexture getCurrentTexture();
};

enum GPUCanvasAlphaMode {
    "opaque",
    "premultiplied",
};

enum GPUCanvasToneMappingMode {
    "standard",
    "extended",
};

dictionary GPUCanvasToneMapping {
  GPUCanvasToneMappingMode mode = "standard";
};

dictionary GPUCanvasConfiguration {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.RENDER_ATTACHMENT
    sequence<GPUTextureFormat> viewFormats = [];
    PredefinedColorSpace colorSpace = "srgb";
    GPUCanvasToneMapping toneMapping = {};
    GPUCanvasAlphaMode alphaMode = "opaque";
};
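For illustration (non-normative), canvas presentation: configure the context with the platform's preferred format, and re-query `getCurrentTexture()` every frame, since the current texture changes from frame to frame.

```javascript
// One-time setup: configure the canvas for WebGPU presentation.
function configureCanvas(canvas, device) {
  const context = canvas.getContext("webgpu");
  context.configure({
    device,
    format: navigator.gpu.getPreferredCanvasFormat(),
    alphaMode: "premultiplied",
  });
  return context;
}

// Per frame: getCurrentTexture() returns a new texture each frame, so it
// must be called inside the render loop, not cached at setup time.
function frameView(context) {
  return context.getCurrentTexture().createView();
}
```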

enum GPUDeviceLostReason {
    "unknown",
    "destroyed",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUDeviceLostInfo {
    readonly attribute GPUDeviceLostReason reason;
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUError {
    readonly attribute DOMString message;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUValidationError
        : GPUError {
    constructor(DOMString message);
};

[Exposed=(Window, Worker), SecureContext]
interface GPUOutOfMemoryError
        : GPUError {
    constructor(DOMString message);
};

[Exposed=(Window, Worker), SecureContext]
interface GPUInternalError
        : GPUError {
    constructor(DOMString message);
};

enum GPUErrorFilter {
    "validation",
    "out-of-memory",
    "internal",
};

partial interface GPUDevice {
    undefined pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};
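A non-normative error-scope sketch: wrapping a block of API calls in a `"validation"` scope captures validation errors raised inside it, so they resolve through `popErrorScope()` instead of firing `uncapturederror` on the device. The wrapper name is illustrative.

```javascript
// Run fn() inside a validation error scope and report any captured error.
async function withValidationScope(device, fn) {
  device.pushErrorScope("validation");
  const result = fn();
  const error = await device.popErrorScope(); // null if nothing was captured
  if (error) console.warn("validation error:", error.message);
  return result;
}
```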

[Exposed=(Window, Worker), SecureContext]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};

partial interface GPUDevice {
    attribute EventHandler onuncapturederror;
};

typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

typedef unsigned long long GPUSize64Out;
typedef unsigned long GPUIntegerCoordinateOut;
typedef unsigned long GPUSize32Out;

typedef unsigned long GPUFlagsConstant;

dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;

dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;

dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;

dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    GPUIntegerCoordinate height = 1;
    GPUIntegerCoordinate depthOrArrayLayers = 1;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;